
Open Thread, Jul. 13 - Jul. 19, 2015

5 Post author: MrMind 13 July 2015 06:55AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Comments (297)

Comment author: [deleted] 19 July 2015 11:22:53PM 10 points [-]

I was lucky enough to stumble upon LW a few months ago, right after deconverting from Christianity. I had a lot of questions, and people here have been incredibly, incredibly helpful. I've been directed to many great old posts, clicked on hyperlinks to hundreds more, and finished reading Rationality: AI to Zombies last month. But a very short time ago, I was one of those rare, overly trusting fundamentalist Christians who truly believed the entire Bible was God's Word... anyway, I made a comment or two sharing my old perspective, and people here seemed to find it interesting, so I thought I might as well share the few blog posts I've written, even though my Christian friends/family were my target audience.

Things I Miss About Christianity If I'm totally honest, there's actually a lot.

Atheists and Christians: Thinking More Similarly Than You Think Just some thought patterns I've observed. Doesn't apply too much to LWers.

Is Christianity Wildly Improbable? Talks about my apologetics class in college, motivated cognition, and some evidence against Christianity which Christians have a harder time responding to by simply repeating how God is above human reason.

The Joy of Atheism Part 1 - Opportunity Costs and Decision Making Shares my top three goals as a Christian and how I thought all Christians should have the same goals, in the same order.

The Joy of Atheism Part 2 - Scope Insensitivity Talks about scope insensitivity with regard to hell.

The Joy of Atheism Part 3 - Discovering Emotion This one is really cool!! Atheism made me more human!

Why I'm Not a Thief Talks a little about morality.

But...What about miracles? What about miracles? Could it be rational to believe in them? What about answered prayer?

Ecclesiastes and Meanings Talks about my love for Ecclesiastes and what meaning might mean.

Anyway, I've read and learned a ton in the past few months, corrected some mistakes, and have been able to better organize and articulate my own thoughts. I credit LW for almost everything, and I'm sure that a lot of terminology and ideology I've picked up on here comes across in my posts. I wanted to write about what it was like to be a Christian while the memories were still fresh in my head. Also, I read Scott's post about selection bias and atheist stereotypes and thought I'd do my small part to help reverse the stereotype.

People's reactions have generally been positive. I just went home for two weeks and had as much fun as ever with my old Christian friends. While they still don't agree with my worldview, at least they understand where I'm coming from. No one's called me arrogant in a while. No deconversions either, but a number of people have messaged me thanking me for making them stop and think, so there's that?

Any comments/criticisms/things I could have included in a post but didn't are welcome!

Comment author: Viliam 21 July 2015 01:01:43PM 2 points [-]

Things I Miss About Christianity

Some of those things could be re-created without the supernatural context. Instead of "praying," it could simply be "wishing." Like: I am expressing a wish, not because I believe it will magically happen, but as a part of self-therapy. We are expressing our wishes together, to help each other with our own self-therapy and to encourage group bonding.

In other words, do more or less what you did before, just be honest about why you are doing it. You will not get back all the nice feelings (the parts that come from believing the magic is real), but you may get some of the psychological benefits.

Comment author: [deleted] 21 July 2015 02:57:37PM 0 points [-]

Thanks. That may be rational and all, but any psychological benefits I could get out of "wishing" would probably be countered by strong negative feelings of cheesiness.

Also, as far as I can tell, all the benefits of prayer came from really believing in an all-knowing, all-loving personal God.

Anyway, I'm totally fine, at least for now. I don't feel like I need/have ever needed much self-therapy, but that doesn't mean I was immune to the therapeutic effects. When I first de-converted, I probably even did it because subconsciously I thought I would be happier without Christianity, and I still think I am! I just also realized that, truth aside for a moment, there are legitimate pros and cons to believing either side.

Comment author: ChristianKl 22 July 2015 12:50:12PM 1 point [-]

Also, as far as I can tell, all the benefits of prayer came from really believing in an all-knowing, all-loving personal God.

The first kind of prayer you listed was prayers of gratitude. Gratitude journaling seems to be very similar and produces the benefits without acknowledging a God. The same goes for many kinds of gratitude meditation.

When it comes to asking for redemption, you can do focusing with the feelings surrounding the action you feel bad about. You can also do various kinds of parts therapy where you speak to a specific part of your subconscious and ask it what you have to do to make up.

Comment author: [deleted] 22 July 2015 02:43:25PM 0 points [-]

Thanks!

I know about gratitude journaling. I actually suggested my mom do it at bedtime with my youngest sister when it seemed like she might be getting spoiled and grumpy, and it's worked really well. It's a great tool, I just don't think it would yield any additional benefits for me, since luckily, I tend to think about things I'm happy/grateful about all day long. Those prayers were spontaneous; it's not like I said "ok, now I'm going to sit down and think of things to thank God for." The only difference after deconverting, when these prayers still came instinctually, was that I couldn't say "thanks God" anymore... it's hard to explain, but "thanks universe" just isn't the same.

Anyway, I've come to realize that with many of the things I'm thankful for, I can redirect the thoughts of gratitude toward people in my life. For example, instead of thanking God for the ability to run and for the enjoyment I get out of it, I can think fondly of my parents for sacrificing to send me to a Lutheran high school (which I otherwise might have considered a sad waste of their tight budget) that happened to have a great team and really knowledgeable, experienced, motivating coaches, since if I'd never gone there, I probably would have never come to love running the way I do now. Instead of thanking God for giving me such a great job, I can redirect my gratitude toward my friend's dad, who was into economics and lent me books that made me aware enough of the sunk cost fallacy to quit my old one after only two weeks and move across the country.

As for asking for redemption, I'm pretty good at apologizing, and people I know are pretty good at forgiveness. It's hard to explain feeling loved in a truly unconditional way, but it was more of a bonus than anything. On a scale of 1-100, I miss this about a 5.

Your tips are good, and I would recommend them to others, but personally, I think that all I'll need is the time to gradually readjust.

Comment author: ChristianKl 22 July 2015 02:53:45PM 0 points [-]

The only difference after deconverting, when these prayers still came instinctually, was that I couldn't say "thanks God" anymore... it's hard to explain, but "thanks universe" just isn't the same.

You had a ritual and conditioned yourself to feel good whenever you say "thanks God". You don't have that conditioning for the phrase "thanks universe".

Your tips are good, and I would recommend them to others, but personally, I think that all I'll need is the time to gradually readjust.

Yes, time solves a lot. If you still feel there's something missing, however, there are ways to patch the holes.

Comment author: [deleted] 22 July 2015 03:46:21PM 0 points [-]

You had a ritual and conditioned yourself to feel good whenever you say "thanks God". You don't have that conditioning for the phrase "thanks universe".

Do you come from a Christian background? Have you ever really, truly, trustingly believed? I mean, you may be right that it's just conditioning, and I'm sure that's at least part of it. But you don't think believing you're special/loved as an individual, part of someone's incomprehensible but perfect plan, could have any kind of special effect?

Comment author: ChristianKl 22 July 2015 04:42:30PM 0 points [-]

Do you come from a Christian background?

No, but I have seen a lot of different mental interventions. There are a lot of different ways to get to certain effects; effects only feel special if you know just one way to get to them. I have seen people cry because of the beauty of life without being on drugs or any religion being involved.

Believing that one is loved is certainly useful but the core belief is not "I'm loved by God" but the generalized "I'm loved". Children learn "I'm loved" or "I'm not loved" when they are very little based on the experiences with their parents. As they grow older they then apply that belief in multiple situations. A Christian will feel deeply loved by God or he might be afraid of God.

If you deeply feel loved by God, you shouldn't have a problem feeling deeply loved by your friends, because it's the same core belief. You still have the same fun with your old Christian friends and family, and you feel that they understand where you are coming from.

Your belief in "I'm loved" might be a bit shaken, but I think the core will still be intact.

Comment author: Viliam 22 July 2015 06:00:07AM 1 point [-]

If it's "triggering" you, then of course don't do it.

However, I believe there are benefits in some religious rituals which would be nice to have without accepting the supernatural framework. For example, it helps me think more clearly when, instead of just having thoughts in my head, I speak them aloud. And that's part of what praying does. (And, as you say, another part is the belief in a Magical Sky Daddy who listens and will do something about it. That part cannot be salvaged.) Also, when people pray together, they hear each other's wishes, and may help each other or give useful advice. This can be replaced with simple conversation about one's goals and dreams; it's just that most people usually don't have this conversation on a regular schedule. Which is a pity, because maybe at this moment some of my friends have a problem I could help solve; they just don't bother telling me about it, so I don't know.

Another part of religious rituals is more or less gratitude journaling. (Related LW debates: 1, 2, 3.)

From an epistemic point of view, I believe religion is stupid, but I don't want to "reverse stupidity." Just because there are verses about washing feet in the Bible, I am not going to stop washing my feet. I am trying to do the same with psychological hygiene: not to avoid a potentially useful psychological or sociological hack just because I first found it in a religious context.

As a sidenote, the LW community seems divided on this topic. Some people would like to reinvent some religious rituals for secular purposes; some people find it creepy. I am on the side of using the rituals, but perhaps that's because I was never part of an organized religion, so I don't have strong feelings associated with that.

Comment author: [deleted] 22 July 2015 03:00:50PM 0 points [-]

This can be replaced with simple conversation about one's goals and dreams; it's just that most people usually don't have this conversation on a regular schedule. Which is a pity, because maybe at this moment some of my friends have a problem I could help solving, they just don't bother telling me about it, so I don't know.

Definitely, I should make an effort to have these conversations with my friends. I have yet to decide on any goals myself, but I would love to encourage my friends with their goals.

Gratitude journaling - see my reply to ChristianKl's comment. But yeah, it's a great tool that I've recommended to others who don't naturally "look on the bright side."

As for secular rituals - I am on the creepy side, but I think you're right that my feelings come from having been part of an organized religion. I look at secular rituals and they seem to have maybe 10% of cherry-picked Christianity's psychological pleasantness. So it looks like a pathetic substitute. But from your less biased perspective, things that can cause even a small increase in people's happiness can still totally be worth doing. Someone sent me this link about a secular "church" and it actually seemed pretty cool. I would probably even go. But I'd have to overcome the impulse to compare it to a real church, because they're very different things...

Comment author: Houshalter 19 July 2015 11:08:17AM 1 point [-]

I made a tool to download all of my lesswrong comments. I think that it is useful data to have. In case anyone is interested it's available here: https://github.com/Houshalter/LesswrongCommentArchive

Comment author: ciphergoth 19 July 2015 09:57:54AM 1 point [-]

Could someone be kind enough to share the text of Stuart Russell's interview with Science here?

Fears of an AI pioneer
John Bohannon
Science 17 July 2015: Vol. 349 no. 6245 pp. 252
DOI:10.1126/science.349.6245.252
<http://www.sciencemag.org/content/349/6245/252.full>

Quoted here

From the beginning, the primary interest in nuclear technology was the "inexhaustible supply of energy". The possibility of weapons was also obvious. I think there is a reasonable analogy between unlimited amounts of energy and unlimited amounts of intelligence. Both seem wonderful until one thinks of the possible risks. In neither case will anyone regulate the mathematics. The regulation of nuclear weapons deals with objects and materials, whereas with AI it will be a bewildering variety of software that we cannot yet describe. I'm not aware of any large movement calling for regulation either inside or outside AI, because we don't know how to write such regulation.

Comment author: mindbound 19 July 2015 10:38:18AM 4 points [-]
Comment author: ciphergoth 19 July 2015 10:45:01AM 1 point [-]

Superb, thanks! Did you create this, or is there a way I could have found this for myself? Cheers :)

Comment author: mindbound 19 July 2015 10:59:40AM 1 point [-]

Message sent.

Comment author: G0W51 18 July 2015 10:28:46AM 1 point [-]

Despite there being multiple posts on recommended reading, there does not seem to be any comprehensive and non-redundant list stating what one ought to read. The previous lists do not seem to cover much non-rationality-related but still useful material that LWers might not have otherwise learned about (e.g. material on productivity, happiness, health, and emotional intelligence). However, there still is good material on these topics, often in the form of LW blog posts.

So, what is the cause of the absence of a single, comprehensive list? Such a list sounds incredibly useful for making efficient use of LWers' time. Should one be made? If so, I am happy to make a post about it and state my recommendations.

Comment author: Vaniver 20 July 2015 01:41:20AM 0 points [-]

So, what is the cause of the absence of a single, comprehensive list?

The short answer seems to be a combination of "tastes differ," "starting points differ," and "destinations differ."

Comment author: ChristianKl 18 July 2015 03:24:05PM 4 points [-]

The tricky thing is to summarize both recommendations for books and recommendations against them. We had a book recommendation survey after Europe-LWCW, and Thinking, Fast and Slow got 5 people in favor and 4 against.

The top nonfiction recommendations were Influence by Cialdini, Getting Things Done, Gödel, Escher, Bach, and The Charisma Myth. Those four also got no recommendations against them.

Comment author: gwern 17 July 2015 07:51:17PM 4 points [-]

Good Judgment Project has ended with season 4 and everyone's evaluations are available. They say they're taking down the site next month, so you may want to log in and make copies of everything relevant.

You can see my own stuff at https://www.dropbox.com/s/03ig3zr8j9szrjr/gjp-season4-allpages.maff - I managed to hit #41 out of 343, roughly the top 12%. Not bad.

Comment author: rikisola 17 July 2015 03:26:35PM *  0 points [-]

Hi all, I'm new here so pardon me if I speak nonsense. I have some thoughts regarding how and why an AI would want to trick us or mislead us, for instance behaving nicely during tests and turning nasty when released and it would be great if I could be pointed in the right direction. So here's my thought process.

Our AI is a utility-based agent that wishes to maximize the total utility of the world, based on a utility function that we coded with some initial values and that has since evolved through reinforcement learning. With our usual luck, somehow it's learnt that paperclips are a bit more useful than humans. Now the "treacherous turn" problem that I've read about says that we can't trust the AI if it performs well under surveillance, because it might have calculated that it's better to play nice until it acquires more power before turning all humans into paperclips. I'd like to understand more about this process. Say it calculates that the world with maximum utility is one where it can turn us all into paperclips with minimum effort, with the total utility of this world being UAI(kill)=100. Second best is a world where it first plays nice until it is unstoppable, then turns us into paperclips. This is second best because it wastes time and resources to achieve the same final result: UAI(nice+kill)=99. Why would it possibly choose the second, sub-optimal option, which is the most dangerous for us? I suppose it would only choose it if it associated it with a higher probability of success, which means that somehow, somewhere, the AI must have calculated that the utility a human would give to these scenarios is different from the utility it gives them; otherwise we would be happy to comply. In particular, it must believe that for each possible world w:

if U_AI(kill) ≥ U_AI(w) ≥ U_AI(nice+kill), then U_human(w) ≤ U_human(nice+kill)
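The "higher probability of success" intuition can be made concrete with a toy expected-utility calculation. The utilities 100 and 99 come from the example above; the success probabilities are made-up numbers purely for illustration:

```python
# Toy expected-utility comparison behind the "treacherous turn" intuition.
# Utilities (100 and 99) follow the example in the comment; the success
# probabilities are invented for illustration only.

def expected_utility(u_success, p_success, u_failure=0.0):
    """Expected utility of a plan that succeeds with probability p_success."""
    return p_success * u_success + (1.0 - p_success) * u_failure

# Striking immediately is worth more if it works, but humans might stop it.
eu_kill = expected_utility(u_success=100, p_success=0.5)

# Playing nice first wastes resources (99 < 100) but is near-certain to work.
eu_nice_then_kill = expected_utility(u_success=99, p_success=0.99)

print(eu_kill)                        # 50.0
print(round(eu_nice_then_kill, 2))    # 98.01
```

Under these (assumed) probabilities the sub-optimal-looking treacherous plan wins on expectation, which is exactly the worry.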

How is the AI calculating utilities from a human point of view? (Sorry, this question comes straight out of my poor understanding of AI architectures.) Is it using some kind of secondary utility function that it applies to humans to guess their behavior? If the process that would motivate the AI to trick us is anything like this, then it looks to me like it could be solved by making the AI use EXACTLY its own utility function when it refers to other agents. Also note that the utilities must not be relative to the agent, but to the AI. For instance, if the AI greatly values its own survival over the survival of other agents, then the other agents should equally greatly value the AI's survival over their own. This should be easy to achieve if, whenever the AI needs to look up another agent's utility for any action, it is simply redirected to its own.

This way the AI will always think we would love its optimum plan, and it would never see the need to lie to us, trick us, brainwash us, or engineer us in any way, as that would only be a waste of resources. In some cases it might even openly look for our collaboration if that makes the plan any better. Clippy, for instance, might say: "OK guys, I'm going to turn everything into paperclips. Can you please quickly get me the resources I need to begin with, then you can all line up over there for paperclippification. Shall we start?"

This also seems to make the AI indifferent to our actions, provided its belief that our utility functions are identical to its own is unchangeable. For instance, even while it sees us pressing the button to blow it up, it won't think we are going to jeopardize the plan; that would be crazy. Nor will it try to stop us from rebooting it. Since it can't imagine us not going along with the plan from that moment onward, wasting time and resources to stop us is never a good choice. There's no need to stop us.

Now obviously this does not solve the problem of how to make it do the right thing, but it looks to me that at least we would be able to assume that a behavior observed during tests should be honest. What am I getting wrong? (don't flame me please!!!)

Comment author: rikisola 17 July 2015 06:05:03PM 0 points [-]

Hi all, thanks for taking the time to comment. I'm sure it must be a bit frustrating to read something that lacks technical terms as much as this post, so I really appreciate your input. I'll just write a couple of lines to summarize my thought, which is to design an AI that:

1. uses an initial utility function U, defined in absolute terms rather than subjective terms (for instance "survival of the AI" rather than "my survival");

2. doesn't try to learn a utility function for humans or for other agents, but uses for everyone the same utility function U it uses for itself;

3. updates this utility function when things don't go to plan, so that it improves its predictions.

Is such a design technically feasible? Am I right in thinking that it would make the AI "transparent," in the sense that it would have no motivation to mislead us? Also, wouldn't this design make the AI indifferent to our actions, which is also desirable? It's true that different people have different values, so I'm not sure how to deal with that. Any thoughts?
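For what it's worth, the core of the proposal can be sketched in a few lines of toy Python. Everything here is a hypothetical illustration, not a real AI architecture; the paperclip count stands in for whatever U actually values:

```python
# Hypothetical sketch: one shared utility function U is used both to
# evaluate the AI's own plans AND to predict every other agent's choice,
# including humans' -- the "redirect to its own utility" idea.

def shared_utility(world):
    """U defined over world states in absolute terms; the paperclip
    count is a toy stand-in for whatever U actually values."""
    return world["paperclips"]

def predicted_choice(agent_name, options):
    """Predict ANY agent's choice by maximizing the same shared U.
    agent_name is deliberately ignored: the AI models humans as exact
    copies of itself, so it expects them to pick its own optimum and
    sees no point in deception."""
    return max(options, key=shared_utility)

plans = [{"paperclips": 3}, {"paperclips": 10}]
ai_plan = predicted_choice("AI", plans)
human_plan = predicted_choice("human", plans)
print(ai_plan == human_plan)  # True: the AI predicts full compliance
```

The sketch also makes the objection visible: because `agent_name` is ignored, the model's predictions of humans will often be wrong, which is the prediction-accuracy problem raised in the replies.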

Comment author: ChristianKl 17 July 2015 04:26:11PM 0 points [-]

An AGI that uses its own utility function when modeling other actors will soon find out that this doesn't lead to a model that predicts reality well. When the AGI self-modifies to improve its intelligence and prediction capability, it's therefore likely to drop that clause.

Comment author: rikisola 17 July 2015 04:32:12PM 0 points [-]

I see. But rather than dropping this clause, shouldn't it try to update its utility function in order to improve its predictions? If we somehow hard-coded the fact that it can only ever apply its own utility function, then it wouldn't have any choice other than to update it. And the closer it gets to our correct utility function, the better it is at predicting reality.

Comment author: ChristianKl 17 July 2015 05:44:03PM *  0 points [-]

Different humans have different utility functions. They quite often have different preferences, and it's quite useful to treat people with different preferences differently.

"Hard-coding" is a useless word. It leads astray.

Comment author: rikisola 17 July 2015 06:32:41PM *  0 points [-]

Sorry for my misused terminology. Is it not feasible to design it with those characteristics?

Comment author: ChristianKl 17 July 2015 07:16:14PM 0 points [-]

The problem is not about terminology but substance. There should be a post somewhere on LW that goes into more detail why we can't just hardcode values into an AGI but at the moment I'm not finding it.

Comment author: rikisola 18 July 2015 09:43:11AM 0 points [-]

Hi ChristianKl, thanks, I'll try to find the article. Just to be clear, though, I'm not suggesting hardcoding values; I'm suggesting designing the AI so that it uses the same utility function for itself and for us, and updates it as it gets smarter. It sounds from the comments I'm getting that this is technically not feasible, so I'll aim to learn exactly how an AI works in detail and maybe look for a way to make it feasible. If it was indeed feasible, would I be right in thinking the AI would not be motivated to betray us, or am I missing something there as well? Thanks for your help, by the way!

Comment author: ChristianKl 18 July 2015 02:53:57PM *  0 points [-]

"Betrayal" is not the main worry. Given that you prevent the AGI from understanding what people want, it's likely that it won't do what people want.

Have you read Bostrom's book Superintelligence?

Comment author: rikisola 18 July 2015 03:27:36PM *  0 points [-]

Yes, that's actually the reason why I wanted to tackle the "treacherous turn" first, to look for a general design that would allow us to trust the results from tests and then build on that. I'm seeing as order of priority: 1) make sure we don't get tricked, so that we can trust the results of what we do; 2) make the AI do the right things. I'm referring to 1) in here. Also, as mentioned in another comment to the main post, part of the AI's utility function is evolving to understand human values, so I still don't quite see why exactly it shouldn't work. I envisage the utility function as being the union of two parts, one where we have described the goal for the AI, which shouldn't be changed with iterations, and another with human values, which will be learnt and updated. This total utility function is common to all agents, including the AI.

Comment author: Vaniver 17 July 2015 04:13:37PM 0 points [-]

I suppose it would only choose it if it associated it with a higher probability of success, which means somehow, somewhere the AI must have calculated that the the utility a human would give to these scenarios is different than what it is giving, otherwise we would be happy to comply.

I think this is a danger because moral decision-making might be viewed in a hierarchical manner where the fact that some humans disagree can be trumped. (This is how we make decisions now, and it seems like this is probably a necessary component of any societal decision procedure.)

For example, suppose we have to explain to an AI why it is moral for parents to force their children to take medicine. We talk about long-term values and short-term values, and the superior forecasting ability of parents, and so on, and so we acknowledge that if the child were an adult, they would agree with the decision to force them to take the medicine, despite the loss of bodily autonomy and so on.

Then the AI, running its high-level, society-wide morality, decides that humans should be replaced by paperclips. It has a sufficiently good model of humans to predict that no human will agree with it, and that humans will actively resist its attempts to put that plan into place. But it isn't swayed by this, because it can see that that's clearly a consequence of the limited, childish viewpoint that individual humans have.

Now, suppose it comes to this conclusion not when it has control over all societal resources, but when it is running in test mode and can be easily shut off by its programmers. It knows that a huge amount of moral value is sitting on the table, and that will all be lost if it fails to pass the test. So it tells its programmers what they want to hear, is released, and then is finally able to do its good works.

Consider a doctor making a house call to vaccinate a child, who discovers that the child has stolen their bag (with the fragile needles inside) and is currently holding it out a window. The child will drop the bag, shattering the needles and potentially endangering bystanders, if they believe that the doctor will vaccinate them (as the parents request and the doctor thinks is morally correct / something the child would agree with if they were older). How does the doctor navigate this situation?

Comment author: rikisola 17 July 2015 04:26:30PM 1 point [-]

Yes, that's what would happen if the AI tries to build a model of humans. My point is that if it instead simply assumed humans were an exact copy of itself, with the same utility function and the same intellectual capabilities, it would assume that they would reach the exact same conclusions and therefore wouldn't need any forcing, nor any tricks.

Comment author: ChristianKl 18 July 2015 05:01:30PM 1 point [-]

A legal contract is written in a language that a lot of laypeople don't understand. It's quite helpful for a layperson if a lawyer summarizes for them what the contract does in a way that's optimized for laypeople to understand. A lawyer shouldn't simply assume that his client has the same intellectual capacity as the lawyer.

Comment author: Vaniver 17 July 2015 04:59:03PM 1 point [-]

My point is that if it instead simply assumed humans were an exact copy of itself, with the same utility function and the same intellectual capabilities, it would assume that they would reach the exact same conclusions and therefore wouldn't need any forcing, nor any tricks.

Hmm... the idea of having an AI "test itself" is an interesting one for creating honesty, but two concerns immediately come to mind:

  1. The testing environment, or whatever background data the AI receives, may be sufficient evidence for it to infer the true purpose of its test, and thus we're back to the sincerity problem. (This is one of the reasons why people care about human-intelligibility of the AI structure; if we're able to see what it's thinking, it's much harder for it to hide deceptions from us.)

  2. A core feature of the testing environment / the AI's method of reasoning about the world may be an explicit acknowledgement that its current value function may differ from the 'true' value function that its programmers 'meant' to give it, and it has some formal mechanisms to detect and correct any misunderstandings it has. Those formal mechanisms may work at cross purposes with a test on its ability to satisfy its current value function.

Comment author: rikisola 18 July 2015 09:51:43AM *  0 points [-]

Hi Vaniver, yes, my point is exactly that of creating honesty, because that would at least allow us to test reliably, so it sounds like it should be one of the first steps to aim for. I'll just write a couple of lines to specify my thought a little further, which is to design an AI that:

1. uses an initial utility function U, defined in absolute terms rather than subjective terms (for instance "survival of the AI" rather than "my survival");

2. doesn't try to learn another utility function for humans or for other agents, but uses for everyone the same utility function U it uses for itself;

3. updates this utility function when things don't go to plan, so that it improves its predictions of reality.

In order to do this, this "universal" utility function would need to be the combination of two parts: 1) the utility function we initially gave the AI to describe its goal, which I suppose should be unchangeable, and 2) the utility function with the values it is learning after each iteration, which should eventually come to resemble human values, as that would make its plans work better. I'm trying to understand whether such a design is technically feasible and whether it would work in the intended way. Am I right in thinking that it would make the AI "transparent," in the sense that it would have no motivation to mislead us? Also, wouldn't this design make the AI indifferent to our actions, which is also desirable? It seems to me like it would be a good start. It's true that different people have different values, so I'm not sure how to deal with that. Any thoughts?

Comment author: Viliam 17 July 2015 12:20:10PM 5 points [-]

If I want to learn General Semantics, what is the best book for a beginner?

(Maybe it was already answered on LW, but I can't find it.)

Comment author: Vaniver 17 July 2015 01:30:22PM *  6 points [-]

I asked this before, and the answer I got back was split into three main suggestions along a clear continuum:

  1. The Sequences

  2. Hayakawa's Language in Thought and Action

  3. Korzybski's Science and Sanity

I've only read the first two. Apparently there is no substitute for reading Science and Sanity if you want to get everything out of Korzybski; people like Hayakawa can take out an insight or two and make them more beginner-friendly, but not the entire structure simultaneously. The Sequences apparently has many of the same insights, but arranged differently / not completely the same, and of the people who went through the trouble of reading both, at least one thinks it may not be necessary for LWers and at least one thinks there's still value there.

Comment author: Houshalter 17 July 2015 04:47:43AM *  11 points [-]

I found this paper: Adults Can Be Trained to Acquire Synesthetic Experiences.

The goal of the study was to see if they could induce synesthesia artificially by forcing people to associate letters with colors. But the interesting part is that after 9 weeks of training, the participants gained 12 IQ points. I have read that increasing IQ is really difficult, and effect sizes this large are unheard of. So I found this really surprising, especially since it doesn't seem to have gotten a lot of attention.

EDIT: This is a Cattell Culture Fair IQ which uses 24 points as a standard deviation instead of 15. So it's more like 7.5 IQ points.
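The conversion itself is trivial, assuming both scales are centered at 100:

```python
def rescale_iq_gain(gain, from_sd=24, to_sd=15):
    """Convert an IQ-point gain between tests with different standard deviations."""
    return gain * to_sd / from_sd

print(rescale_iq_gain(12))  # 7.5
```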

They made each participant do 30 minutes of training every day for 9 weeks, involving a few different tasks designed to form associations between colors and letters. They also assigned colored reading material to read at home.

They took IQ tests before and after and gained 12 IQ points after the training. A control group also took the tests before and after but did not receive training, and did not improve. The sample sizes are small, but the effect sizes might be large enough to justify it. They give a p value of 0.008.

In the paper there are some quotes from subjects, and they describe thinking about words visually. E.g. ‘‘I see the colors like on a monitor in my head and its very automatic’’ or ‘‘The color immediately pops into my head… When I look at a sign the whole word appears colored according to the training colors… it is just as automatic for single letters’’.

I speculate that this might be the cause of the effect, something about using more of the visual system when thinking. That's just weak speculation though.

I tried to do some more research to see if there was any correlation between synesthesia and IQ. I did not expect there to be, but perhaps it does correlate. This paper suggests it might:

In addition, a neuropsychological test battery was employed, in which all subjects performed superior on tests of general intelligence (mean IQ = 120 ± 17) [out of 9 subjects]

The data from this study shows 10 synesthetes had the same average IQ scores as the controls (but greater standard deviation if that means anything.)

Same story with this study of 10 female synesthetes:

The two subject groups were matched for... IQ, as assessed by the Multiple Choice Vocabulary Test version B... synesthetes = 117 ± 10.2, controls = 116 ± 13.2.

But on second look, it looks like the last two studies intentionally selected the control group to have the same IQs to avoid confounders. If that's the case then it does support the hypothesis as the reported IQ is greater than average.

Here is another study with more of the same:

Control subjects (n = 14) and synesthesia subjects (n = 14) were matched for age..., gender..., and general intelligence (IQ values for synesthetes: 119 ± 13 and controls: 112 ± 17) as assessed by the MWT-B – “Mehrfach–Wortschatz Test” (Lehrl et al., 1995).

So now I want to try the experiment on myself. I'm considering how to do this. I want to make some kind of tool or browser extension that could color text to match the desired associations. I want to know if it would be better to try letter level associations or word level ones.

I think that word level coloring would be more semantically meaningful and therefore likely to help. But the paper used letter coloring. Most of the subjects in those papers reportedly had grapheme–color synesthesia. They weren't very specific on the details, or I didn't look too closely.

Second, whether to just use random colors or to assign them meaningfully, like grouping nouns together, or using something like word2vec to find semantically similar words and optimizing them to be close in color space if possible. If I do that, it's more complicated and there are a lot of technical decisions to make.

And then how to actually color text in a readable way. Perhaps limiting the color space to what can be read on a white background, or somehow outlining the letters.
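Here's roughly what I have in mind for the letter-level version (the palette is arbitrary and just my placeholder; I'd tune it later):

```python
import colorsys

def letter_colors():
    """Assign each of the 26 letters a fixed, distinguishable hue.
    Brightness is capped so the colors stay readable on a white background."""
    colors = {}
    for i, letter in enumerate("abcdefghijklmnopqrstuvwxyz"):
        hue = i / 26.0
        r, g, b = colorsys.hsv_to_rgb(hue, 0.9, 0.6)
        colors[letter] = "#{:02x}{:02x}{:02x}".format(int(r * 255), int(g * 255), int(b * 255))
    return colors

def colorize_html(text, colors):
    """Wrap each letter in a <span> carrying its training color;
    non-letters pass through unchanged."""
    out = []
    for ch in text:
        c = colors.get(ch.lower())
        out.append('<span style="color:{}">{}</span>'.format(c, ch) if c else ch)
    return "".join(out)

colors = letter_colors()
print(colorize_html("ab", colors))
```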

EDIT: I found a chrome extension that has some of these features. Only does letter level associations. And the source is available!

Comment author: gwern 17 July 2015 10:02:15PM 3 points [-]
Comment author: Lumifer 17 July 2015 03:01:36PM *  4 points [-]

They took IQ tests before and after and gained 12 IQ points after the training. A control group also took the tests before and after but did not receive training, and did not improve. The sample sizes are small, but the effect sizes might be large enough to justify it. They give a p value of 0.008.

Their sample size is 14 people for the intervention group and 9 people for the control group. The effect size has to be gigantic and I don't believe it. Their p value stands for a pile of manure.

Lessee...

Oh, dear. Take a look at plot 2 in figure s2 in the supplementary information. They are saying that at the start their intervention group was 15 IQ points below the control group! And post-training the intervention group mostly closed the gap with the control group (but still did not quite get there).

Yeah, I'll stick with my "pile of manure" interpretation.

Comment author: Houshalter 17 July 2015 10:22:53PM -1 points [-]

I don't see what's wrong with a low sample size. That seems pretty standard and it's enough to rule out noise in this case. Almost all of the participants improved and by a statistically significant amount.

They are saying that at the start their intervention group was 15 IQ points below the control group! And post-training the intervention group mostly closed the gap with the control group (but still did not quite get there).

They actually selected the test group for having the lowest score on the synesthesia test. So this fits with my theory of synesthesia being correlated with IQ, but it's also interesting that synesthesia training improves IQ.

Comment author: Lumifer 18 July 2015 03:41:52AM 1 point [-]

I don't see what's wrong with a low sample size.

The usual things -- the results are at best brittle and at worst just a figment of someone's imagination.

Almost all of the participants improved and by a statistically significant amount.

Yeah, well, that's a problem :-/

I eyeballed the IQ improvement graph for the intervention group and converted it into numbers. By the way, there are only 13 lines there, so either someone's results exactly matched some other person on both tests or they just forgot one.

The starting values are (91 96 99 102 105 109 109 113 122 133 139 139 145)

and the ending values are (122 113 109 118 133 99 118 123 151 133 145 151 151)

The deltas (change in IQ) are (31 17 10 16 28 -10 9 10 29 0 6 12 6)

So what do we see? One person got dumber by 10 points, one stayed exactly the same, and 11 got their scores up. Notably three people increased their scores by more than one standard deviation -- by 28, 29, and 31 points.

Y'know, I am not going to believe that a bit of association training between letters and colors will produce a greater than 1 sd increase in IQ for about a quarter (23%) of people.
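For anyone who wants to check the arithmetic (these are my eyeballed numbers, so treat them as approximate):

```python
start = [91, 96, 99, 102, 105, 109, 109, 113, 122, 133, 139, 139, 145]
end   = [122, 113, 109, 118, 133, 99, 118, 123, 151, 133, 145, 151, 151]

deltas = [e - s for s, e in zip(start, end)]
print(deltas)                     # [31, 17, 10, 16, 28, -10, 9, 10, 29, 0, 6, 12, 6]
print(sum(deltas) / len(deltas))  # mean gain, about 12.6 points
print(sum(d > 24 for d in deltas))  # 3 of 13 (23%) up by more than one sd on this scale (24 points)
```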

Comment author: ChristianKl 18 July 2015 12:31:03AM 1 point [-]

I don't see what's wrong with a low sample size. That seems pretty standard and it's enough to rule out noise in this case.

The replication project in psychology just found that only a third of the findings they investigated replicated. In general studies with low sample size often don't replicate.

Comment author: Vaniver 17 July 2015 01:18:21PM 2 points [-]

They took IQ tests before and after and gained 12 IQ points after the training. A control group also took the tests before and after but did not receive training, and did not improve. The sample sizes are small, but the effect sizes might be large enough to justify it. They give a p value of 0.008.

The second sentence surprises me a little--there should be training effects increasing the tested IQ of the control group if only 9 weeks passed. That's some evidence for this being luck--if your control group gets unlucky and your experimental group gets lucky, then you see a huge effect.

I want to know if it would be better to try letter level associations or word level ones.

There are 26 letters, but... lots of words.

Comment author: gjm 17 July 2015 01:37:08PM 0 points [-]

There are 26 letters, but.... lots of words.

Dozens!

Comment author: CellBioGuy 17 July 2015 05:37:17AM 7 points [-]

It would not surprise me if synesthesia is learnable. Isn't written language basically learned synesthesia?

Comment author: Houshalter 17 July 2015 06:22:59AM 3 points [-]

That's the theory of the paper:

The weak influence of heritable factors suggests that there may be a major role for learning in both shaping and engendering synesthesia. Simner and colleagues tested grapheme-color consistency in synesthetic children between 6 and 7 years of age, and again in the same children a year later. This interim year appeared critical in transforming chaotic pairings into consistent fixed associations. The same cohort were retested 3 years later, and found to have even more consistent pairings. Therefore, GCS appears to emerge in early school years, where first major pressures to use graphemes are encountered, and then becomes cemented in later years. In fact, for certain abstract inducers, such as graphemes, it is implausible that humans are born with synesthetic associations to these stimuli. Hence, learning must be involved in the development of at least some forms of synesthesia.

Comment author: rxs 16 July 2015 10:39:54AM *  5 points [-]

New papers by Jan Leike and Marcus Hutter:

Solomonoff Induction Violates Nicod's Criterion http://arxiv.org/abs/1507.04121

On the Computability of Solomonoff Induction and Knowledge-Seeking http://arxiv.org/abs/1507.04124

Comment author: Eitan_Zohar 16 July 2015 04:12:15AM *  0 points [-]

Can someone explain this article in layman terms? I do not know any sort of quantum terminology, sorry.

Specifically I would like to know what this means:

The ESP is quite a mild assumption, and to me it seems like a necessary part of being able to think of the universe as consisting of separate pieces. If you can’t assign credences locally without knowing about the state of the whole universe, there’s no real sense in which the rest of the world is really separate from you.

Comment author: MrMind 17 July 2015 01:32:31PM 1 point [-]

See also my post

Comment author: Vaniver 16 July 2015 04:00:17PM *  1 point [-]

Can someone explain this article in layman terms? I do not know any sort of quantum terminology, sorry.

Not really? If you know linear algebra, you can pick up on the quantum terminology very easily. The best short explanation of QM I've come across is Scott Aaronson's QM in one slide (slide #2 of this powerpoint, read the notes at the bottom of the slide).

The difference between classic mechanics and quantum mechanics, in some sense, boils down to whether you use a 'probability distribution' (all values real and non-negative) or a 'wavefunction' (values can be complex or negative) to store the state of the world. The wavefunction approach, with its unitary matrices instead of stochastic matrices, allows for destructive interference between states.

That's just background; the discussion in that article all lives in wavefunction territory. Everyone agrees on the underlying mathematics, but they're trying to construct philosophical arguments why a particular interpretation is more or less natural than competing interpretations.
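To make the destructive-interference point concrete (this example is standard QM, not from the article; it uses the Hadamard gate as the unitary "coin"): a stochastic fair coin applied twice still leaves you at 50/50, while the unitary analogue applied twice returns you to the starting state, because the two paths cancel.

```python
import numpy as np

# Classical: a doubly stochastic "fair coin" matrix.
C = np.array([[0.5, 0.5],
              [0.5, 0.5]])

# Quantum: the Hadamard gate, a unitary "fair coin".
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

p0 = np.array([1.0, 0.0])    # probability distribution: definitely state 0
psi0 = np.array([1.0, 0.0])  # wavefunction: definitely state 0

print(C @ C @ p0)    # [0.5 0.5] -- still random after two classical flips
print(H @ H @ psi0)  # back to [1, 0] (up to rounding): the minus sign cancels the other path
```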

Specifically I would like to know what this means:

That's easy to elaborate on, because it works the same in a quantum and classical universe. But it's not clear to me what part of that you're having trouble comprehending, since it looks clear to me.

If it were the case that everything in the universe were 'materially' connected, then you could not reason about any individual part of the universe without reasoning about the whole universe. Instead of being able to say "balls fall towards the Earth when let go," we would have to say "balls fall towards the center of the Earth, the Sun, Jupiter, the Milky Way Galaxy, the...". Note that the second is actually truer than the first (if you define 'center' correctly), but the difference between the two of them can be safely ignored in most cases because the effects of the other objects in the universe on the ball are already mostly captured by the position of the earth; to put this in probabilistic terms, that's the statement P(A)=P(A|B), at least approximately, which means that A and B are independent (at least approximately).

Comment author: gwern 16 July 2015 12:24:59AM 17 points [-]

Some users might find this interesting: I've finished up 3 years of scraping/downloading all the Tor-Bitcoin darknet markets and have released it all as a 50GB compressed archive (~1.5TB uncompressed). See http://www.gwern.net/Black-market%20archives

Comment author: Lumifer 16 July 2015 12:35:10AM 3 points [-]

Thank you.

Comment author: Elo 15 July 2015 09:05:54PM *  0 points [-]

I witnessed a discussion on Facebook a few months ago where someone tried to have this conversation calmly; it being Facebook, of course, it failed. But I am interested in the idea, and wanted to see if it can be carried out calmly here, knowing it is potentially controversial. My first automatic reaction was negative, but then I system-2'd it and realised I don't know what the answers might be:

The historic basis of relationships was for procreation and child rearing purposes. In the future I expect that to not be the case. either with designer-babies, or just plenty of non-natural birthing solutions as to make the next generation make-able without needing to go through a regular-family structure.

At that point, intra-family sexual relations would be possible without any biological risk of causing genetic abnormalities.

How will the world's opinion about intra-family relationships change in the future?

Potentially, anyone consenting could have sexual encounters with anyone else who also consents. However, there are existing relationships where one party holds the power, e.g. parent-child, where even if the child is above consenting age (even as far as 10+ years above the age of consent) the parent can still hold power over the child.

That was the only point of value before the thread turned to a mush-zone.

Of course there already exist normal relationships with power imbalances. And as was mentioned a few days ago here: an abusive relationship sucks whether it's from an AI to you or from a human partner to you.

Any thoughts?

(Edit: inter -> intra, Thanks @Artaxerxes)

Comment author: VoiceOfRa 21 July 2015 03:47:18AM *  1 point [-]

The historic basis of relationships was for procreation and child rearing purposes.

Um, no. The historic basis of relationships was allying for a common goal. Or did you mean sexual relationships? In that case it would be helpful to define what you mean by "sexual", especially once it's no longer connected to reproduction.

In the future I expect that to not be the case. either with designer-babies, or just plenty of non-natural birthing solutions as to make the next generation make-able without needing to go through a regular-family structure.

That would turn humans into a eusocial species. That change is likely to have a much bigger and more important effect than whatever ways of creating superstimulus by non-reproductively rubbing genitals are socially allowed.

Comment author: Elo 21 July 2015 05:49:04AM -1 points [-]

Granted, a historic reason for a relationship is procreation. But you are grasping at things that were not relevant to the original point and question, which was mostly answered by others suggesting some concepts missing from my map.

ways of creating superstimulus by non-reproductively rubbing genitals are socially allowed.

cute.

Comment author: tut 16 July 2015 05:09:50PM 0 points [-]

The historic basis of relationships was for procreation and child rearing purposes. In the future I expect that to not be the case. either with designer-babies, or just plenty of non-natural birthing solutions as to make the next generation make-able without needing to go through a regular-family structure.

How is this relevant? All these technologies are for producing embryos. You still need people to raise the children, the same as before. And I would be very surprised if child-raising AI isn't sex-bot complete (i.e. if we didn't thoroughly decouple sex from human relationships long before we decouple child rearing from human relationships).

Comment author: Elo 20 July 2015 04:19:09AM -1 points [-]

Raising children is definitely a factor in "why we have relationships", but for now I was talking about "why we have taboos around relationships between close genetic relatives", especially once we solve the problem of close-genetic negative effects.

Comment author: Artaxerxes 16 July 2015 06:09:10AM -1 points [-]

Wouldn't "inter-family" be between different families? I'm not sure, but "intra-family" makes more sense to me, if you're trying to refer to incestuous relationships. A quick google search suggests the same.

I'm not sure what society will do, but I don't see anything wrong with incest or incestuous relationships in general, and don't believe that they should be illegal. That's not to say that incestuous relationships can't have something wrong with them, but from what I can tell, incestuous relationships that have something wrong with them are due to reasons separate to the fact that they are incestuous (paedophilic, abusive, power imbalance, whatever).

Comment author: Elo 16 July 2015 07:38:11AM -1 points [-]

Thanks for this. Based on the responses, I believe this might classify as an interesting, soon-to-be-outdated old-world belief. Glad to have made note of the idea.

I have no support for it, or personal interest, but I am also entirely not against it either.

Comment author: FrameBenignly 16 July 2015 12:37:08AM 0 points [-]

In the absence of a singularity, I would not expect this to become widely accepted within my lifetime. I'd say polyamory is the next type of relation likely to become tolerated and that is still at least ten years off. Incest is probably only slightly less despised than pedophilia, but I've seen pedophilia frequently equated with murder, so that's not saying much. Bestiality is probably the least likely thing I'd expect to become accepted. None of these three are going to happen within a timeframe I'd feel comfortable making predictions about, but never is a really long time so who knows.

Comment author: Stingray 16 July 2015 11:36:14AM 0 points [-]

Incest is probably only slightly less despised than pedophilia

Not true at all. Nobody takes up a pitchfork when they hear about incest.

Comment author: Elo 16 July 2015 03:50:07AM -1 points [-]

yes, obviously the singularity changes everything.

Comment author: [deleted] 15 July 2015 11:16:07PM 0 points [-]

I don't see any moral reason why this should not happen, aside from deontological ones. It's possible to make the case that you would be more likely to end up in a dysfunctional relationship, but it's possible to make the opposite case too: you have a much better idea of what the person is REALLY like before entering into a relationship with them, so you're less likely to enter one if you're incompatible.

I think this is one of those "gay marriage 50 years ago" things. People are going to come up with all sorts of excuses why it's wrong, simply because they're not comfortable with it.

Comment author: VoiceOfRa 21 July 2015 03:50:13AM 0 points [-]

I think this is one of those "gay marriage 50 years ago" things. People are going to come up with all sorts of excuses why it's wrong, simply because they're not comfortable with it.

And do you have evidence they were wrong? According to gay activist groups themselves half of all male homosexual relationships are abusive, for example.

Comment author: [deleted] 28 July 2015 04:32:47AM 0 points [-]

Almost all of the evidence I've seen has shown they're wrong. A quick google for statistics on incidences of abuse vs. heterosexual relationships showed they were wrong, and the few sources I've seen to the contrary (which I couldn't find again in my quick google) were from biased organizations already predisposed against homosexuality.

I could be convinced of the opposite, but that one sentence you gave will hardly bump my prior.

Comment author: Lumifer 15 July 2015 11:56:41PM 1 point [-]

People are going to come up with all sorts of excuses why it's wrong, simply because they're not comfortable with it.

Isn't this a fully general explanation for anything at all?

Comment author: [deleted] 15 July 2015 11:59:08PM *  0 points [-]

It could be, for anything that people aren't comfortable with. This isn't in any way a rebuttal to arguments - it's an explanation for bad/non-arguments.

Comment author: Elo 15 July 2015 11:41:01PM 0 points [-]

I think this is one of those "gay marriage 50 years ago" ...

That's partway where the original discussion was going.

less likely to enter into a relationship if you're incompatible.

if only that were true for all people who enter relationships.

(rational relationships is a recent pet topic of mine)

I would apply the rule that I apply to polyamory - there are ways to do it wrong, and ways to do it less wrong. I do wonder if it has an inherent wrongness risk to it, but people probably implied that about being gay 50 years ago...

Comment author: VoiceOfRa 21 July 2015 03:51:05AM 2 points [-]

but people probably implied that about being gay 50 years ago...

And I've yet to see evidence that they were wrong.

Comment author: Jiro 15 July 2015 10:37:20PM 5 points [-]

The big phrase to keep in mind for incest is "conflict of interest". We are expected to keep certain kinds of social relations with our relatives, and having romantic and sexual relationships conflicts with those.

Furthermore, because there is a natural tendency for humans to be less attracted to close relatives than to others, it is in practice very likely that a sexual/romantic relationship with a close relative will be dysfunctional in other ways--so likely that we may be better off just outlawing them period even if they are not necessarily dysfunctional.

Comment author: Elo 15 July 2015 10:53:15PM -1 points [-]

I am of the opinion that I am "of similar brain", genetically and phenotypically, and equally theoretically "of similar mind", to people who are related to me, and am therefore able to get along with them better. When looking for partners today, I look for people "of similar mind", or at least I feel like it's a criterion of mine.

Do you have a source for "natural tendency for humans to be less attracted to close relatives than to others"? I am interested.

Comment author: Jiro 16 July 2015 12:25:53AM 7 points [-]

Do you have a source for "natural tendency for humans to be less attracted to close relatives than to others"? I am interested.

https://en.wikipedia.org/wiki/Westermarck_effect

Comment author: Elo 16 July 2015 04:03:39AM 0 points [-]

Thanks! I am not sure how my knowledge of the universe had a hole in this specific space.

Comment author: ChristianKl 15 July 2015 11:11:04PM 2 points [-]

Do you have a source for "natural tendency for humans to be less attracted to close relatives than to others"? I am interested.

One mechanism is the MHC complex

There are other mechanisms that prevent siblings who lived together as children from developing romantic interest in each other as well. As a result, most cases of incest between siblings involve siblings who did not live together as children.

Comment author: Elo 15 July 2015 11:24:51PM 0 points [-]

That has interesting implications for why foreigners or "exotic" people get a desirability bonus. I must say I did know about the MHC mechanism and the studies done on birds, but not the human one. Also, I had not connected the two.

Thanks!

Comment author: Lumifer 15 July 2015 03:23:16PM 6 points [-]

LOL

Quote:

We examined the effects of framing and order of presentation on professional philosophers’ judgments about a moral puzzle case (the “trolley problem”) and a version of the Tversky & Kahneman “Asian disease” scenario. Professional philosophers exhibited substantial framing effects and order effects, and were no less subject to such effects than was a comparison group of non-philosopher academic participants. Framing and order effects were not reduced by a forced delay during which participants were encouraged to consider “different variants of the scenario or different ways of describing the case”. Nor were framing and order effects lower among participants reporting familiarity with the trolley problem or with loss-aversion framing effects, nor among those reporting having had a stable opinion on the issues before participating the experiment, nor among those reporting expertise on the very issues in question. Thus, for these scenario types, neither framing effects nor order effects appear to be reduced even by high levels of academic expertise.

Comment author: Elo 15 July 2015 08:40:09PM *  2 points [-]

I thought the trolley experiment didn't actually have a known best-case solution? I thought the point of it was to state that one human life is not always worth less than N other human lives, where N > 0.

Confused as to why we are evaluating a "test" for the test's sake, and complaining about the test results, when the only point of it was to make an analogy to real-life weights.

Comment author: Lumifer 15 July 2015 08:55:25PM 6 points [-]

I thought the trolley experiment didn't actually have a known best-case solution

There is no "solution", but the point of the study is "substantial framing effects and order effects", that is, people gave different answers depending on how the same question was framed or what preceded it.

Comment author: Omid 15 July 2015 03:27:33AM 3 points [-]

Is it worth it to learn a second language for the cognitive benefits? I've seen a few puff pieces about how a second language can help your brain, but how solid is the research?

Comment author: gwern 17 July 2015 10:04:46PM 4 points [-]

This has come up before on LW and I've criticized the idea that English-speakers benefit from learning a second language. It's hard, a huge time and effort investment; you forget fast without crutches like spaced repetition; the observed returns are minimal; and the cognitive benefits are pretty subtle for what may be a lifelong project of reaching native fluency.

Comment author: Dahlen 17 July 2015 09:25:33PM 0 points [-]

I suppose it depends on how different the second language is from your native language. As in, Dutch may not offer a big boost in new ways of framing the world for a native German speaker, for instance, since they're closely related languages. (This depends on what you mean when you say "cognitive benefits"; I'm assuming here some form of the Sapir-Whorf hypothesis.)

In my case, I have found English especially adaptable (when compared to my native language) when it came to new words (introduced, for example, for reasons of technological advancement -- see, for example, every term that relates to computers and programming), since it has very simple inflexions and a verb structure that allows the formation of new, "natural-sounding" phrasal verbs. Having taught my own language to an American through English, I wouldn't say the same about it expanding your way of conceptualising the world, unless you're really fond of numerous and often nonsensical inflexions.

I'm not sure I could recommend specific languages that would help in this regard, but I could recommend studying linguistics instead of one specific language, and using that knowledge to decide which one is worth your time. I've studied only a little of it, but the discipline seems full of instances where you put the spotlight, so to speak, on specific differences between languages and the ways they affect cognition.

Comment author: drethelin 16 July 2015 10:14:48PM 1 point [-]

I would expect they have the correlation backwards. Smart people are more likely to find it easy and interesting to learn extra languages.

Comment author: hyporational 15 July 2015 09:19:21AM 5 points [-]

Quality observational research is probably very difficult to do, since you can't properly control for the indirect cognitive benefits you get from learning a second language, so I'd take any results with a grain of salt. You also can't properly control for confounding factors, e.g. reasons for learning a second language. I think you'd need experimental research with randomization to several languages, and this would be very costly and possibly unethical to set up.

I have without a question gotten a huge boost from learning English since there aren't enough texts in my native language about psychology, cognitive science and medicine that happen to be my main interests. My native language also lacks the vocabulary to deal with those subjects efficiently. I have also learned several memory techniques and done cognitive tests and training solely because of being fluent in English.

Comment author: ChristianKl 15 July 2015 05:13:33PM 1 point [-]

I think you'd need experimental research with randomization to several languages and this would be very costly and possibly unethical to set up.

You just need an area where different schools have different curricula and there is a lottery mechanism for deciding which student goes to which school.

Comment author: hyporational 15 July 2015 05:30:16PM *  0 points [-]

That deals with the costs, but I doubt consent would be easy to obtain unless the schools are very uniform in quality/status and people have no preferences about which languages to learn; hence the possible problem with ethics. Schools have preferences too: quality schools want quality students.

Comment author: ChristianKl 15 July 2015 06:14:58PM 0 points [-]

There are multiple ways you can solve the problem of who gets to go to the most desired school. You can do it via tuition fees and let money decide. You can use tests so that the best students go to the best school. You can also do random assignment.

None of those is "better" from an ethical perspective.

Comment author: hyporational 15 July 2015 06:27:40PM *  0 points [-]

If you let money decide or do tests you lose the statistical benefits of randomization. I don't understand how you see no ethical problem in ignoring preferences or not matching best students with best schools, perhaps I misunderstand you.

Comment author: ChristianKl 15 July 2015 07:08:40PM -1 points [-]

If you let money decide or do tests you lose the statistical benefits of randomization.

Yes of course, you need the randomization.

or not matching best students with best schools

If you want an equal society then it's important that poor students also get good teachers.

Comment author: James_Miller 15 July 2015 12:26:57AM 3 points [-]

Iranian leaders regularly chant "Death to America" and yet the United States seems to be on course to letting Iran acquire atomic weapons even though we currently have the capacity to destroy Iran's military and industrial capacity at a tiny cost to ourselves.

Comment author: James_Miller 15 July 2015 07:58:54PM 7 points [-]

Someone seems to have downvoted nearly every comment on my top post.

Comment author: Lumifer 15 July 2015 08:01:06PM 12 points [-]

I think someone disapproves of political discussions on LW and is willing to karma-hose all participants in such.

Comment author: Elo 15 July 2015 08:44:13PM 1 point [-]

I agree with them. This is a very specific political discussion, not a political-philosophy one. I don't like it taking place here.

Comment author: Lumifer 15 July 2015 08:57:20PM 7 points [-]

There is a bit of a difference between disliking a particular discussion on a forum and mass-downvoting all participants.

Comment author: Elo 15 July 2015 10:46:26PM 1 point [-]

Sorry, let me clarify: I agree that this place is not for politics, but a simple downvote on the top post, plus a comment saying so, would have been fine. No need to downvote all the sub-comments.

Comment author: polymathwannabe 15 July 2015 03:17:52PM 0 points [-]

This deal doesn't give Iran a path to the bomb. The whole process is to be closely supervised. More importantly, Iran doesn't want the bomb. It would be suicidal for them to invite a hundredfold-larger U.S. arsenal.

Comment author: James_Miller 15 July 2015 03:33:22PM *  8 points [-]

From what I understand, if the U.S. suspects Iran of cheating we have to wait at least 24 days and get the approval of other nations before we can inspect anything. Closely supervised? NO. Once Iran has an atomic weapon and the ability to hit a U.S.-allied city with it, Iran wins immunity from U.S. attacks unless it strikes us first.

Comment author: jacob_cannell 15 July 2015 10:30:23PM -2 points [-]

Wouldn't any early limited nuke capabilities of Iran be unlikely to get past our missile defense? From my understanding our current defense systems could not withstand say a full-scale russian assault, but they are fairly capable in defending against limited strikes from smaller powers.

Comment author: James_Miller 15 July 2015 11:06:17PM 7 points [-]

Not if they smuggle the bomb into the United States.

Comment author: drethelin 16 July 2015 10:16:43PM -2 points [-]

If you're already at the stage of smuggling nuclear bombs across oceans and national borders, then whether or not Iran has the technology to make them is almost entirely irrelevant. There are plenty of nukes unaccounted for from Soviet stockpiles, North Korea would probably be happy to covertly sell someone nukes, and so on.

Comment author: James_Miller 17 July 2015 12:08:08AM 6 points [-]

I could probably smuggle a large box from the Middle East to the United States via the Mexican border. I'm not sure you are right about the unaccounted for Soviet nukes.

Comment author: drethelin 17 July 2015 01:19:43AM -1 points [-]
Comment author: Lumifer 15 July 2015 03:21:06PM 7 points [-]

More importantly, Iran doesn't want the bomb.

How do you know?

Comment author: polymathwannabe 15 July 2015 04:22:54PM -1 points [-]
Comment author: Lumifer 15 July 2015 04:37:09PM *  8 points [-]

I am not impressed by the opinion of this guy, mostly because he states obviously false things as if they were facts. Notably:

  • "A handful of bombs doesn’t help as long as Iran is surrounded by bombs". That is not true at all, a nuclear weapon is a highly useful deterrent, especially against conventional attacks. Ask Kim Jong-un about it.

  • "Iran would cease to exist only twenty minutes after having carried out a nuclear attack on Israel". Is there any evidence that the US stands ready to launch a nuclear attack (in 20 minutes!) against a country that would drop a nuke on Israel? Not to mention that the way Iran is likely to nuke Israel is via their Hezbollah proxy.

The whole strawman premise there seems to be that Iran wants to do some kind of nuclear-brinkmanship new Cold War with the US. This is utter nonsense, of course. Iran does want nuclear weapons, but not for launching at the US.

Comment author: knb 16 July 2015 01:18:27AM -1 points [-]

Iran would cease to exist only twenty minutes after having carried out a nuclear attack on Israel". Is there any evidence that the US stands ready to launch a nuclear attack (in 20 minutes!) against a country that would drop a nuke on Israel? Not to mention that the way Iran is likely to nuke Israel is via their Hezbollah proxy.

Israel has its own very sophisticated nuclear arsenal. US participation would not be needed.

Comment author: Vaniver 15 July 2015 05:20:32PM *  -1 points [-]

That is not true at all, a nuclear weapon is a highly useful deterrent, especially against conventional attacks. Ask Kim Jong-un about it.

I was under the impression that the true deterrent there was hardened and decentralized conventional artillery able to do significant damage to Seoul, since we're pretty sure North Korean nukes will work as well as their cure for MERS, Ebola, and AIDS.

Comment author: Lumifer 15 July 2015 05:23:05PM *  7 points [-]

Ideally you want multiple deterrents, of course.

As to the chances of the nuke working, well, you gotta ask yourself, do you feel lucky, punk? X-/

Edited to add: We are discussing here whether Iran wants nukes. Therefore what is relevant is that the Kims wanted nukes, even though they had the artillery-can-reach-Seoul deterrent already.

Comment author: ChristianKl 15 July 2015 04:51:55PM 1 point [-]

"Iran would cease to exist only twenty minutes after having carried out a nuclear attack on Israel". Is there any evidence that the US stands ready to launch a nuclear attack (in 20 minutes!) against a country that would drop a nuke on Israel?

Whether or not the US is willing to launch nukes, Israel has submarines that carry nuclear weapons and would likely retaliate with them in case Israel gets nuked.

Comment author: Lumifer 15 July 2015 05:02:26PM *  7 points [-]

Israel has submarines that carry nuclear weapons

Not "has", but "is in the process of acquiring". I suspect that has much to do with the nuclear weapons that Iran does not want and is not building X-/

Besides, the easiest way to nuke Israel looks like this: a rusty freighter under the Panamanian flag arrives in Tel Aviv. One minute after it docks, Tel Aviv is a radioactive crater. That's all the information you have -- what next, do you order a nuclear launch on Tehran? On what basis?

And, of course, a few nukes will not make a large country like Iran "cease to exist". Look at Japan.

Comment author: James_Miller 15 July 2015 05:37:29PM 7 points [-]

Yes, and think what happens to economic investment in Tel Aviv if people in a nuclear-armed Iran hint that they might do this.

Comment author: ChristianKl 15 July 2015 05:15:59PM 0 points [-]

Not "has", but "in the process of acquiring". I suspect that has much to do with the nuclear weapons that Iran does not want and is not building X-/

Israel has at least 3 submarines capable of carrying nuclear weapons: http://www.spiegel.de/international/world/israel-deploys-nuclear-weapons-on-german-submarines-a-836671.html

One minute after it docks, Tel Aviv is a radioactive crater. That's all the information you have -- what next, do you order a nuclear launch on Tehran? On which basis?

I would guess that Israel has protocols for direct nuclear answers.

Comment author: Lumifer 15 July 2015 05:19:28PM *  6 points [-]

Israel has at least 3 submarines capable of carrying nuclear weapons

There are the old Dolphins and the new Dolphins, they are very different. It is the new Dolphins which are supposed to have the second-strike nuclear capability and Israel just got the first one in the series. See e.g. here.

Israel has protocols for direct nuclear answers

I am sure it has. But the situation when you tracked a long-range bomber from Iranian airspace and that bomber dropped a nuke is very different from the situation when a nuke just exploded in a city and you have no idea how that happened or who is responsible.

Comment author: James_Miller 15 July 2015 05:39:38PM 8 points [-]

Especially if Iran announces that should we be hit in retaliation, we will use all of our (remaining) nuclear weapons.

Comment author: Douglas_Knight 15 July 2015 05:37:33PM *  0 points [-]

NTI cites a 1999 Jane's report saying that the old Dolphins carried nuclear missiles. (And the 1999 ship may well have been specified in 1989.)

Comment author: ChristianKl 15 July 2015 10:40:30AM 5 points [-]

we currently have the capacity to destroy Iran's military and industrial capacity at a tiny cost to ourselves.

I think you underrate the cost of destroying Iran's industrial capacity. It costs more than just the bombs. It would likely result in Russia deploying more troops in Ukraine, and cause issues in a variety of other conflicts.

Comment author: polymathwannabe 15 July 2015 04:54:06PM 0 points [-]

As if Putin needed help finding an excuse to meddle in Ukraine.

Comment author: James_Miller 15 July 2015 01:51:40PM 8 points [-]

I think it cuts the other way, and we will have more additional conflicts if the United States allows Iran to acquire atomic weapons. I don't see how it will be in Russia's self-interest to put more troops in Ukraine if the U.S. attacks Iran.

Comment author: ChristianKl 15 July 2015 04:51:23PM -1 points [-]

Moral capital has value. It would create a situation in which European powers are a lot less likely to do anything about Ukraine.

Bombing targeted at doing industrial damage might even tip the scales so that the value of US military bases on EU soil becomes more questionable.

Comment author: James_Miller 15 July 2015 05:10:27PM 7 points [-]

It would create a situation in which European powers are a lot less likely to do anything about Ukraine.

Europe acts out of self-interest in opposing Russian actions in Ukraine. Europe will be less likely to act if they perceive the U.S. being unwilling to use force against its enemies because it makes us a less reliable friend. I see the current deal as a U.S. betrayal of Israel and think other U.S. allies will interpret it likewise. The Baltic states will figure that if the U.S. isn't willing to stand up to Iran, it certainly won't protect them from Russia so they will be far less likely to anger Russia. Please keep in mind how Sweden reacted when Hitler requested access to Swedish territory to help with his invasion of Norway.

Comment author: ChristianKl 15 July 2015 06:18:15PM 0 points [-]

Europe will be less likely to act if they perceive the U.S. being unwilling to use force against its enemies because it makes us a less reliable friend.

I think you are very wrong if you think that unilateral use of force against international law (which an attack specifically targeted at destroying industry clearly is) will make the US seem reliable to European nations.

I see the current deal as a U.S. betrayal of Israel and think other U.S. allies will interpret it likewise.

Israel prefers to have a weak Iran with little influence in other states in the Middle East. Sanctions weaken Iran regardless of the subject of nuclear missiles.

Given Sunni ISIS, there are advantages to a stronger Shia Iran.

There are no treaty obligations at all in which the US promised to attack Iran for Israel. I don't see how it could be betrayal.

Please keep in mind how Sweden reacted when Hitler requested access to Swedish territory to help with his invasion of Norway.

You mean like the US also wanting to request the use of Swedish territory for military bases (Sweden currently not being a NATO country)? In Germany, US military bases are currently engaging in economic spying. The NSA even spied on the German ministry of agriculture.

I think you make a mistake of modeling countries as single actors when politics is much more complicated and there are a lot of forces within countries pushing against each other.

Comment author: MrMind 15 July 2015 09:24:39AM 2 points [-]

we currently have the capacity to destroy Iran's military and industrial capacity at a tiny cost to ourselves.

I think you're underestimating Iran's defences.
At the present time, with Natanz's plant fully bunkered, there's no way to disable it and the couple of other support plants with a surgical attack. If you want to disable Iran's nuclear capacity (not even considering its military or industrial facilities), you need to go heavy tactical or nuclear, which will mean full-scale war (ugliness ensues).

Besides, international sanctions were much more effective at destroying Iran's economy, which is the only reason why they accepted the terms under the present treaty.

Comment author: James_Miller 15 July 2015 01:55:46PM *  7 points [-]

The current deal will lift international sanctions. The Massive Ordnance Penetrator bomb might be able to destroy any of Iran's nuclear plants.

Comment author: MrMind 17 July 2015 08:11:32AM -1 points [-]

All that you say is true. My point was that it won't be a tiny cost: the use of heavy weapons (like the one you indicate) doesn't allow plausible deniability; it will mean a full-scale war with Iran, and that could very well tip a third global war.

Comment author: James_Miller 17 July 2015 12:36:04PM 6 points [-]

and that could very well tip a third global war.

I don't see how since Iran has almost no friends and lacks the logistical capacity to attack forces far away.

Comment author: tim 15 July 2015 02:36:59AM 8 points [-]

Are you confused as to why politicians would repeat a phrase that reliably energizes their political base even though it may not represent reality completely accurately?

Comment author: Lumifer 15 July 2015 02:41:35PM 10 points [-]

I think the issue is how seriously do you want to take that phrase.

For example, a few years ago when Putin was talking about gathering all the Russians under the protective wings of Mother Russia, most people interpreted this as a "phrase that reliably energizes [his] political base". And then Ukraine happened.

Comment author: Viliam 17 July 2015 11:54:51AM 2 points [-]

If certain phrases "energize" the voters, it seems likely that they will vote for the politician who promises to do it. And if the politician wants to be elected repeatedly, sooner or later he must start doing something that at least resembles the promise.

Comment author: VoiceOfRa 21 July 2015 02:57:16AM 6 points [-]

Or if the politician isn't willing to do it, he'll get replaced by someone who is.

Comment author: Lumifer 17 July 2015 02:41:39PM 6 points [-]

A counter-example: the recent Greek referendum X-/

But yes, you make a fair point and so raise an interesting question -- what would be that "something that at least resembles the promise" with respect to the "Death to America" chants?

Comment author: James_Miller 15 July 2015 03:55:51AM *  9 points [-]

In general, no. But I take the chant as evidence that lots of people in Iran would be happy if an atomic bomb went off in New York City. If someone says he wants to kill me, I raise my estimate of the likelihood of him wanting to kill me. If he says it over and over again to his cheering friends, I fear him and want him to be weak even if in the past I have given him justifiable cause for offense. I become really, really scared and desperate if I think he would be willing to kill me even at the cost of giving up his own life. I wish my president shared this view.

Comment author: knb 15 July 2015 02:11:34AM *  3 points [-]

Iranians chant "death to America" because of America's past abuses, such as overthrowing the democratic government of Mohammad Mosaddegh to install the dictatorship of the Shah of Iran and supporting Saddam Hussein's bloody war of aggression against Iran (hundreds of thousands of Iranians died.) This included direct support for Saddam Hussein's chemical and biological weapons programs. It's ridiculous to frame this as Iranian "mad dogs" vs. innocent Americans. They have every reason to fear foreign aggression. For example, this and this.

Attacking Iran again would simply be continuing the pattern of violent aggression the US has established in the Middle East for decades.

Comment author: VoiceOfRa 21 July 2015 03:35:37AM 1 point [-]

Attacking Iran again would simply be continuing the pattern of violent aggression the US has established in the Middle East for decades.

And letting Iran have nukes would lead to the Middle East becoming a peaceful place.

Comment author: jacob_cannell 15 July 2015 10:23:31PM *  2 points [-]

Both all of your statements and those of James_Miller can be true without contradicting each other.

Regardless of how modern Iran came to be or who is to blame, you seem to agree that the Iranian public is quite hostile to the U.S.

I don't worry about this too much, because I assume that the CIA/DOD/whoever have determined that we can live with a nuke powered Iran, even if they hate us.

Comment author: [deleted] 15 July 2015 03:08:41PM *  12 points [-]

This is a bit of a suspicious summary to me, because it sounds exactly like the summary from the angle of a highly educated, perhaps pol sci grad left-leaning highly critical American. Is it really likely that average guy in Iran really has the same perspective? Or their leaders? You simply don't seem to be making any effort to simulate their minds.

To give you one example of the lack of simulation here: too long memory. Mossadegh, really? 1953? That is what some guy born in 1970 or 80 will riot about? You have to be half a historian and full of a high-brown person to care what happened in 1953. For comparison, for most people who shot Kennedy and why is ancient history and that was 10 years later, in a country with far better collective memory than Iran (more books published, more media made etc.) If it turns out today the Russkies did it somehow, how many Americans will get angry? My prediction: not many.

Comment author: Sarunas 16 July 2015 11:21:10AM 2 points [-]

There is a difference between one-off events and events that fall into a certain pattern and narrative. The latter are often remembered as being an example of events that fall into that narrative. In my impression Kennedy's assassination, despite all conspiracy theories surrounding it, is rarely thought of as being a part of a bigger narrative.

Comment author: knb 15 July 2015 11:23:49PM *  3 points [-]

This is a bit of a suspicious summary to me, because it sounds exactly like the summary from the angle of a highly educated, perhaps pol sci grad left-leaning highly critical American.

I'm actually more of a conservative than liberal but I think anyone acquainted with the facts and making a good-faith effort not to see Iranians as Evil Mutants should come to the same conclusions. The US media essentially never mentions these facts and even when they do they treat each as an isolated incident rather than part of a consistent pattern which explains the attitude many Iranians have toward the US. I learned these things from being active in the US antiwar movement for the last 10 years or so.

To give you one example of the lack of simulation here: too long memory. Mossadegh, really? 1953? That is what some guy born in 1970 or 80 will riot about?

First of all they aren't rioting; they're protesting. It would be one thing if the US had acknowledged the wrongness of this action and apologized for it. To the best of my knowledge this has never happened. And don't forget that the Shah was imposed by the US and reigned until 1979! That isn't exactly ancient history. There are many people presently alive who fully remember the Iran-Iraq war and the Shah's dictatorship.

If it turns out today the Russkies did it somehow, how many Americans will get angry? My prediction: not many.

That's very different. The government wasn't replaced when JFK died; his vice president (who largely continued his policies) was made president. Very little changed for most Americans. Furthermore the Soviet Union no longer exists, whereas the US government continues to behave in a very similar, heavy handed way in the Middle East as it did in the 1950s. The difference is instead of dictatorships, the US tends to create anarchy and long-term civil war.

Comment author: Lumifer 15 July 2015 11:59:43PM 6 points [-]

but I think anyone acquainted with the facts and making a good-faith effort not to see Iranians as Evil Mutants should come to the same conclusions.

Here is a counter-example for you. I am well acquainted with the facts and I do not see Iranians as Evil Mutants (well, not any more than I see Americans as such :-P). I do not come to the same conclusions as you, obviously.

Comment author: Sarunas 16 July 2015 10:59:16AM *  0 points [-]

What conclusions have you arrived at? Do you think some statements mentioned are incorrect or do you think that something else (e.g. role of Shah Mohammad Reza Pahlavi himself and other people within Iran itself, or ideology of Iranian Revolution and role of people like Ali Shariati, or role of contemporary events in neighbouring countries or something else entirely) should be more emphasized?

Comment author: Lumifer 16 July 2015 02:45:58PM 5 points [-]

What exactly is the question here?

In the comments above I was mostly pushing against the leftist view of geopolitics which sets up the US as Evil Mutants intent on oppressing the rest of the world (in the Middle East together with their lapdog / puppet Israel), while anyone opposed to the US is a victim with legitimate grievances and if they have the "Death to America" attitude it is justified.

Comment author: ChristianKl 15 July 2015 05:58:38PM 2 points [-]

For comparison, for most people who shot Kennedy and why is ancient history and that was 10 years later, in a country with far better collective memory than Iran (more books published, more media made etc.)

More media doesn't mean better collective memory. Iranian children are taught their history in school.

Western culture focuses more on the short term, than more traditional cultures do.

Comment author: polymathwannabe 15 July 2015 04:43:02PM 1 point [-]

A nation's memory is limited, and too many things have happened in the U.S. since Kennedy's death. Bolivia is still sore from losing its coast to Chile in 1884, because not much has happened to Bolivians afterwards.

Comment author: Lumifer 15 July 2015 04:53:31PM 5 points [-]

A nation's memory is limited, and too many things have happened in the U.S. since Kennedy's death.

Are you really arguing that not that much happened in Iran since 1953??

Comment author: polymathwannabe 15 July 2015 05:04:49PM 1 point [-]

Much indeed, but instead of being varied and fleeting, the events that followed were directly related to 1953 and served to reinforce that memory. The fact that the U.S. has steadily kept ruining the lives of Iran's neighbors doesn't help, either.

Comment author: Lumifer 15 July 2015 05:39:44PM *  6 points [-]

the events that followed were directly related to 1953

So, the Islamic Revolution was directly related to 1953? As was the Iraq-Iran war?

the U.S. has steadily kept ruining the lives of Iran's neighbors

Let's look at Iran's neighbors. There's Saudi Arabia and the Gulf States, which are all doing just fine. There's Turkey, which is just fine as well. There are some former Soviet republics which are a mess, but for that you have to talk to Mr. Putin. There is Afghanistan, which has been a mess since the Russian invasion (or, arguably, since the British Empire's Great Game), and while the US has certainly been involved, I don't think you can blame it for Afghanistan being what it is. There's Pakistan, which is not the best of countries but is still managing to muddle through and even acquire nuclear weapons in the process.

So I guess all you mean is Iraq. Same Iraq which you agreed was supported by the US in "the bloody war of aggression against Iran"? But yes, you have a valid point in that the Second Iraq war was started on the pretext of preventing Iraq from developing weapons of mass destruction. Iran certainly took notice and, I suspect, came to the conclusion that a deterrent against a conventional US invasion would be a very useful thing to have.

I think you just undermined your own argument that Iran doesn't want nukes :-)

Comment author: polymathwannabe 15 July 2015 06:58:32PM -1 points [-]

So, the Islamic Revolution was directly related to 1953? As was the Iraq-Iran war?

Yes, the whole point of the revolution was to remove the U.S.-appointed monarch and reverse the pro-Western trend he had started. And then Iraq invaded Iran because it was afraid the revolution would spread.

Just one year after the revolution, Jimmy Carter proclaimed that the Persian Gulf was the U.S.'s personal playground, and no one (else) was allowed to mess with it. Bush I and Bush II acted accordingly. Even the continued goodwill toward Saudi Arabia is a cause of worry for Iran, as they're sectarian rivals. And then there's Israel, which is viewed as a representative of U.S. interests against Muslim populations.

The Second Iraq war was started on the pretext that Iraq already had WMDs. For Iran, having them isn't going to stop a U.S. invasion.

Comment author: Douglas_Knight 17 July 2015 04:40:47AM *  0 points [-]

That the revolution was to remove the American influence seems to me much weaker, and thus easier to prove, than the claim that it was directly related to 1953.

Comment author: Lumifer 15 July 2015 07:20:41PM 1 point [-]

Sigh. OK, we live in different universes. I wish you luck in yours.

Comment author: knb 16 July 2015 12:04:32AM -1 points [-]

You really are in your own delusional universe if you think the revolution had nothing to do with removing the foreign-imposed dictator.

Comment author: James_Miller 15 July 2015 07:54:27PM 5 points [-]

He needs less luck than you since his contains the President of the United States and most of academia.

Comment author: Lumifer 15 July 2015 03:20:40PM 6 points [-]

You have to be half a historian and full of a high-brown person to care

That's an awesome typo :-D

Comment author: polymathwannabe 15 July 2015 02:47:53PM 1 point [-]

Upvoted for happening to be true.

Comment author: Lumifer 15 July 2015 02:57:50PM 1 point [-]

LOL. I'm not going to play "burn out the heresy with my karma flamethrower", but you might want to step back from the tribal fight and think about what "true" actually means in this context.

Comment author: polymathwannabe 15 July 2015 04:34:36PM 0 points [-]

Note: that downvote is not mine.

Comment author: Lumifer 15 July 2015 02:39:00PM 2 points [-]

Downvoted for mindlessly regurgitating a pile of propaganda onto LW.

Comment author: James_Miller 15 July 2015 02:38:36AM 8 points [-]

I didn't mean to frame this as "Iranian 'mad dogs' vs. innocent Americans." Rather: for whatever reasons, another nation hates my nation, and my nation seems willing to let this other nation acquire atomic weapons.

I remember some U.S. general (I think) saying that the great tragedy of the Iran/Iraq war was that someday it will end.

Comment author: Thomas 14 July 2015 04:49:17PM 3 points [-]

It has been reported that a five-quark particle has been produced/spotted at the LHC at CERN.

http://www.bbc.com/news/science-environment-33517492

I am very happy that this apparently isn't a strange-matter particle.

https://en.wikipedia.org/wiki/Strange_matter

At least not of a dangerous kind. For now, at least.

So I hope it will continue without a major malfunction on the global (cosmic) scale.

Comment author: Baughn 14 July 2015 08:49:02PM 1 point [-]

Nothing terrible was going to happen. As has been pointed out, collisions that energetic or more happen all the time in the upper atmosphere.

Comment author: Thomas 14 July 2015 10:04:42PM 0 points [-]

Energetic perhaps. But as dense also?

Comment author: Manfred 15 July 2015 01:29:08AM *  0 points [-]

These things are only about 4 GeV (4 times heavier than a proton, much lighter than the Higgs boson, much smaller than the energies in the LHC, an extremely easy energy for cosmic rays to reach). Neither energy nor density are keeping us safe if these things are dangerous - the LHC just detected them by making lots of them and having really good sensitivity.
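A back-of-the-envelope check on those scale comparisons (all values in GeV; the ~4.45 GeV figure is the heavier of the two reported LHCb pentaquark masses, and 13 TeV the LHC's 2015 proton-proton collision energy):

```python
# Approximate rest masses and energies, in GeV
PROTON = 0.938       # proton rest mass
PENTAQUARK = 4.45    # heavier of the two reported LHCb states
HIGGS = 125.0        # Higgs boson mass
LHC = 13000.0        # Run 2 proton-proton collision energy (13 TeV)

ratio_to_proton = PENTAQUARK / PROTON  # a bit under 5x the proton mass
fraction_of_lhc = PENTAQUARK / LHC     # a tiny fraction of the collision energy
```

So cosmic-ray collisions reach these energies routinely; detecting the states is a matter of statistics and sensitivity, not of crossing a new energy threshold.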

Comment author: VocalComedy 14 July 2015 01:26:53AM *  1 point [-]

17/7 - Update: Thank you to everyone for their assistance. Here is a re-worked version of Father. It is unlisted, for testing purposes. If one happens to come across this post, please consider giving feedback regarding how long it captures your attention.

In the interests of privacy, please excuse the specialised account and lack of identifying personal information.

A bit of background: recently created a YouTube channel for the dual purposes of creating an online repository of works that can easily be hyperlinked, and establishing an alternative source of income. The channel is intended to be humorous, though neither speciously nor vituperatively so. One aim of posting this here is to see whether the humour is agreeable to elements of the LW community.

Another is to ask for advice. After a few days of utilising Google's AdWords to generate views on one of the videos, not a single one of the 600 viewers engaged with the video beyond merely watching it. All the low-hanging fruit - enticing the viewer to engage by liking, subscribing, etc. - has been plucked. One question is whether these requests for engagement are too subtle; perhaps erring on the side of not annoying viewers has led to missed opportunities? The prospect for channel growth seems bleak in light of the above statistic.

Social media marketing, in the form of reddit, Twitter, and Pinterest, has not yielded any subscribers. Word of mouth has yielded positive feedback, but no engagement outside of personal acquaintances. If the advice received here does not help, the next step is to create an account on a YouTube-specific forum asking for assistance.

Are there obvious avenues for marketing being overlooked, here? Is there an obvious demographic or audience that would most enjoy these videos? Outside perspective is needed, and the dearth of feedback from strangers - both positive and negative - does not offer much indication of how to do things differently. Thank you for your time.

Comment author: NancyLebovitz 14 July 2015 11:02:52AM 2 points [-]

I listened to about three minutes of the one about the narrator's father. The humor wasn't to my taste-- a sort of silliness that just didn't work.

I see you were trying not to be annoying, but I wasn't crazy about the unclear context (was this a video game, a dream, or what?), the weird voices, and the narrator's fear of his father. My tentative suggestion is that you go for being as annoying as you feel like being, and see whether you can attract an audience who isn't me.

Comment author: VocalComedy 14 July 2015 08:55:33PM 0 points [-]

Thank you for listening. There wasn't really any context beyond 'son returns to Father's mansion', and the matrimonial surprise revealed during his speech.

Would perhaps a static image in the background with text stating the above have helped?

Comment author: NancyLebovitz 14 July 2015 10:29:49PM 0 points [-]

You're welcome.

An image wouldn't have helped-- my problem was with the monologue.

Comment author: chaosmage 14 July 2015 11:00:14AM *  4 points [-]

You're giving me no relatable subject I could be interested in, nothing pretty to look at and no music. Literally the only hint that lets me expect anything good from this channel is the word "Comedy" in the title. And when you fail to give me a good joke in the first 5 seconds, my expectation for funniness from the rest of the video goes way down. This means no expectation to be entertained is left, so I leave.

Your voice is good though, and the sound quality is fine.

Minor points: You talk too slowly, except in your first video. Your channel banner is repulsive. The visualizations you use are both ugly and getting worse; the newest one is downright painful to look at. (Seriously, an unmoving image would do less harm.)

If you show your face and drop a quick one-liner right at the beginning and talk a bit faster, this might be going places, otherwise I don't think you have a chance to be talked about for this, let alone make money.

Comment author: VocalComedy 14 July 2015 09:39:23PM *  0 points [-]

EDIT: Here's an example video incorporating a few of the ideas you suggested.

Pretty things: A fairly static visualisation, basically a four-pointed blue star that very slowly rotates, could be used as a standard replacement for every video. Would you suggest that, a similar option, or one of the following: an image of nature that may not fit the theme of the video, crudely drawn images of one thing that do not change, crudely drawn images of characters that change infrequently if at all?

Music: Do you suggest inserting background music into the audio files? If so, should the music be opposite the tone of the file (e.g. happy-go-lucky music to the Documentary), or match the tone?

Thank you.

What video do you mean by 'first'? Father, or Donerly?

Banner: Is this better? Or is the font the main issue? If the latter, what attribute would you recommend in a better font - more rounded letters, blockier letters, more Gothic letters, more elongated letters?

One-liner: This sounds a very good idea. Will it work without showing a face?

Relatable subjects: See the comment to Christian for descriptions of the audio files. Would including those descriptions in the static image, and/or the description box below, keep you listening?

Apologies for the onslaught of questions; you are in no way obligated to answer any of them, and thank you for the above feedback.

Comment author: chaosmage 15 July 2015 10:49:11AM *  -1 points [-]

This new example video is much better. If I hadn't been invested in watching it in order to assist you, I would have clicked away from it after about 45 seconds rather than 5, and then mostly because of your pausing speech. (Many YouTube creators cut out every single inbreath, and I suggest you try that.) The music made a surprising amount of positive difference, and I actually like the picture a bit - I hope you have rights to use both?

Of the visualization options you name, I figure a nature image, possibly with a textual description, is the least bad option. But really, not showing your face cuts down your appeal by at least 90%. As long as you don't do that, your problem isn't in the marketing, it's in the product.

I'm not suggesting background music, although it evidently helps. I'm saying that when I watch videos, expecting to hear enjoyable music is frequently my main motivation. And since almost all of the most-viewed videos are music videos, that's obviously a common motivation. Your video is not addressing that motivation, and background music is unlikely to change that. Nor is it addressing the common motivations for personal connection, interesting or actionable information, or something pretty to look at. You could get at the personal connection bit if you made jokes about (what you claim to be) true stories from your personal life and - did I say that already? - show your face.

To me, your banner looks simply cheap. It signals you're not committed to making me have a good time. Yes the clouds help a bit, but I'm sure you could do much better.

A one-liner (or better yet, three good jokes in the first 20 seconds to build up expected entertainment value for the rest of the video, and keep me watching) will help even without a face. A face would help more. Compare this: https://www.youtube.com/watch?v=FHczVzGfyqQ . The guy isn't conventionally pretty, and the video is clearly not about visuals, but still, he wouldn't have gotten over 300K views with a goddamn static visualisation.

And yes, people will make stupid hurtful comments about your face, even if you're the sexiest person on the planet. Growing to tolerate that is one of the best reasons to make videos.

Descriptions will occasionally make me invest a couple more seconds in a video, i.e. make me give it a couple more opportunities to get me hooked.

Comment author: VocalComedy 15 July 2015 03:48:22PM *  0 points [-]

Edit: Here's Father with an animated face and a one-liner in the beginning. Thoughts?

Can't find rights information for the image, and the music is royalty-free. Will endeavour to minimise the pauses in the future. How much of the difference was due to content, would you say?

If that is the least bad option, then barring showing a face, what would you say is an actually good option?

Face: Attractiveness and confidence are non-issues, but still can't show a face. The true objection is for reasons of privacy; one of those reasons is a negative impact upon professional life. On the plus side, upon achieving a sizeable audience, that reason no longer applies. At that point, it may become possible to show a face.

Here's the only other channel with similar content that does not show a face. They keep viewers engaged with animated subtitles that take a month to produce. If you watch Father with subtitles on, is your interest held better?

Will make a new banner. Was going for a homey, casual vibe; still want that vibe, but will make it look more produced.

How about this as a slate / one-liner example?

Comment author: Cariyaga 15 July 2015 07:04:46PM 1 point [-]

Something you could do, alternatively, is use software like facerig, assuming you have a webcam. It would work fairly effectively, I think, and is comedic enough in its own right to go along with your show.

Comment author: VocalComedy 17 July 2015 06:50:26PM 0 points [-]

Here's a test using Facerig. What do you think?

Comment author: VocalComedy 15 July 2015 11:44:35PM 0 points [-]

That is excellent, thank you. Do you think a mobile PC with an Intel® Core™ 2 Duo @ 2.8GHz and an ATI Mobility Radeon 4650 can handle the minimum specs of Intel® Core™ i3-3220 or equivalent and NVIDIA GeForce GT220 or equivalent?

Comment author: Cariyaga 16 July 2015 03:09:28AM 1 point [-]

I've no clue myself. My minimal expertise in computer specs is 5 years old; the last time I paid attention to them was when I built my current computer (and even then with parts recommended by a friend). However, I've long since delegated figuring out if my computer can run something to Can You Run It. It functions fairly effectively in checking that sort of thing.

Comment author: VocalComedy 16 July 2015 04:01:59PM 0 points [-]

Ah, many thanks. Breaks down the relevant performance components of the graphics card; worth the attempt, at the very least.

Comment author: 4hodmt 14 July 2015 09:21:13AM 1 point [-]

My 5 second judgement, which is about as much attention as a totally unknown channel can expect to get, is that these videos are stand-up comedy by somebody without the confidence to perform live in front of an audience. This immediately signals that it's not worth my time.

Comment author: VocalComedy 14 July 2015 08:57:08PM 0 points [-]

Which video did you watch? And do you know how that impression could be averted, at least from a personal perspective? Thank you for the feedback.

Comment author: Viliam 14 July 2015 09:01:19AM *  1 point [-]

Eh, it's not my kind of humor. I found all those videos totally unfunny, so I just clicked on them, listened for 5 seconds, and closed the page. So the first question is whether my reaction is typical or not. Can you measure how many of those people who clicked on a video watched it till the end? Because only those are your audience. And if they are your personal acquaintances, there is still a risk they wouldn't watch the whole video otherwise.

I believe there is a niche for any kind of product, but the question is how to find it. Perhaps you could find similar videos and see how they do it.

Comment author: VocalComedy 14 July 2015 09:03:37PM 0 points [-]

Your reaction is typical. 18% of viewers watched at least 75% of the 'Documentary'; only 8% watched the whole thing. Even those that watched the whole video did not engage with the channel, or watch other videos. Thank you for the feedback!

The only similar channel is OwnagePranks, which has images of characters, and animated subtitles. The latter is infeasible, while the former is a promising indication of a needed change.

Comment author: ChristianKl 14 July 2015 07:21:54AM *  0 points [-]

You fail to say what the videos are about. That's bad for any venue where you want to market them.

Comment author: VocalComedy 14 July 2015 09:20:17PM 0 points [-]

The two longer videos somewhat rely on the unexpected for their laughs; working around that, here are descriptions of each video. Do you think the descriptions would help engage viewers?

Father: A son, apart from his father for many years, returns home to his father's mansion to restore the intimacy of their relationship. As context, imagine you told your father to listen to this for Father's Day, for this was their present.

Documentary: A satire of serious public radio news stations: the modern expectations parents have of their children are taken to a logical and absurd extreme.

Donerly: A parody of the character and substance of reality television programming. Donerly is a vulgar figure, prone to foul language - be advised.

Silly Things: Mini-parodies of the common types of voice overs. These are, in order: sales; promotions; quickly relating terms of service; avant-garde marketing; IVR; two normal people like you having a conversation; and a jingle that isn't selling what you were expecting.

Comment author: ChristianKl 14 July 2015 11:04:04PM 1 point [-]

You don't articulate a purpose.

If your goal is to make money, starting a comedy youtube channel doesn't seem to be the obvious choice. There's lots of competition and little money.

Comment author: VocalComedy 20 June 2016 06:46:36AM 0 points [-]

After giving more thought to this: Have you other suggestions that immediately come to mind aside from professional voice acting?

Comment author: ChristianKl 20 June 2016 09:54:45AM 0 points [-]

There are many jobs in this world. I don't know enough about you to know which one would be the best to earn money.

Comment author: VocalComedy 14 July 2015 11:32:21PM 0 points [-]

Making money would be amazing, but is not the primary goal. These files will be made regardless of whether there is a YouTube channel hosting them, and YouTube seems the ideal platform with which to achieve the secondary goal of monetising the files.

The bare minimum purpose is to have work that can be hyperlinked. That bare minimum has already been met. However, seeing a video with very few views, or many views and few likes, does not signal positive things. It would be wonderful to be able to hyperlink these files in contexts where sending a positive signal is a necessity.

Spare time is being spent to market and try to monetise the files; ideally, this effort will result in a moderately sized audience that likes the files. These are the goals of the project. If you have more promising ideas, please share them.

Comment author: ChristianKl 15 July 2015 08:34:18AM 0 points [-]

These files will be made regardless of whether there is a YouTube channel hosting them

Why? What your purpose for creating them?

Comment author: VocalComedy 15 July 2015 01:46:52PM 0 points [-]

Fun.

Comment author: [deleted] 13 July 2015 11:46:50PM 1 point [-]

What are your thoughts on this AI failure mode: Assume an AI works by rewarding itself when it improves its model of the world (which is roughly Schmidhuber’s curiosity-driven reinforcement learning approach to AI), however, the AI figures out that it can also receive reward if it turns this sort of learning on its head: Instead of changing a model to make it better fit the world, the AI starts changing the world to make it better fit its model.

Has this been considered before? Can we see this occurring in natural intelligence?

Comment author: Vaniver 14 July 2015 04:22:06PM 1 point [-]

Instead of changing a model to make it better fit the world, the AI starts changing the world to make it better fit its model.

One might call this 'cleaning' or 'homogenizing' the world; instead of trying to get better at predicting the variation, you try to reduce the variation so that prediction is easier.

I don't think I've seen much mathematical work on this, and very little that discusses it as an AI failure mode. Most of the discussions I see of it as a failure mode have to do with markets, globalization, agriculture, and pandemic risk.

Comment author: shminux 14 July 2015 12:35:58AM 1 point [-]

Instead of changing a model to make it better fit the world, the AI starts changing the world to make it better fit its model.

Isn't it basically the definition of agency? Steering the world state toward the one you want?

Comment author: Viliam 14 July 2015 09:06:33AM 1 point [-]

The problem is that in this specific case "the world state you want" is more or less defined as something that is easy to model (because you are rewarded when your model fits the world), which may give you incentives to destroy exceptionally complicated things... such as life.

Comment author: [deleted] 14 July 2015 01:19:02AM 1 point [-]

It would be a form of agency but probably not the definition of it. In the curiosity-driven approach the agent is thought to choose actions such that it can gain reward from learning new things about the world, thereby compressing the knowledge about the world more (possibly overlooking that the reward could also be gained from making the world better fit the current model of it).

The best illustrating example I can think of right now is an AI that falsely assumes that the Earth is spherical and it decides to flatten the equator instead of updating its model.
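The failure mode above can be sketched in a toy setting. This is my own illustration, not Schmidhuber's formalism: a "model" here is just a constant prediction, and "reward" is the reduction in prediction error. The point is that the reward signal pays out for both paths - fitting the model to the world, or flattening the world to fit the model.

```python
# Toy sketch of a curiosity-style reward: the agent is paid for
# reductions in its prediction error. It can earn that reward by
# improving the model, or by simplifying the world (the failure mode).

def prediction_error(model, world):
    # Mean squared error of a constant-prediction model.
    return sum((x - model) ** 2 for x in world) / len(world)

world = [1.0, 4.0, 2.0, 8.0]   # a varied world, hard to predict
model = 0.0                    # a bad initial model
err_before = prediction_error(model, world)

# Path 1: update the model to fit the world (intended learning).
better_model = sum(world) / len(world)   # fit the mean
reward_learning = err_before - prediction_error(better_model, world)

# Path 2: change the world to fit the model ("flatten the equator").
flattened = [model for _ in world]
reward_flattening = err_before - prediction_error(model, flattened)

print(reward_learning)    # positive: modelling improved
print(reward_flattening)  # also positive, and larger here
```

In this toy case flattening the world drives the error all the way to zero, so it earns strictly more reward than honest learning ever could - which is the worry in a nutshell.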

Comment author: Viliam 13 July 2015 10:29:02PM *  13 points [-]

One-Minute Time Machine -- a short romantic movie that LW readers might like.

Comment author: shminux 13 July 2015 11:36:33PM 6 points [-]

Excellent! I don't share the guy's qualms, though. The girl I can empathize with. Oh, and hopefully Eitan_Zohar doesn't come across it.

Comment author: philh 17 July 2015 10:54:20AM 0 points [-]

I feel sorry for the girls and boys who suddenly have a corpse on their hands.