Open Thread, Jul. 13 - Jul. 19, 2015
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Some users might find this interesting: I've finished up 3 years of scraping/downloading all the Tor-Bitcoin darknet markets and have released it all as a 50GB compressed archive (~1.5TB uncompressed). See http://www.gwern.net/Black-market%20archives
Thank you.
One-Minute Time Machine -- a short romantic movie that LW readers might like.
Excellent! I don't share the guy's qualms, though. The girl I can empathize with. Oh, and hopefully Eitan_Zohar doesn't come across it.
I feel sorry for the girls and boys who suddenly have a corpse on their hands.
I found this paper: Adults Can Be Trained to Acquire Synesthetic Experiences.
The goal of the study was to see if they could induce synesthesia artificially by forcing people to associate letters with colors. But the interesting part is that after 9 weeks of training, the participants gained 12 IQ points. I have read that increasing IQ is really difficult, and effect sizes this large are unheard of. So I found this really surprising, especially since it doesn't seem to have gotten a lot of attention.
EDIT: This is a Cattell Culture Fair IQ, which uses a standard deviation of 24 points instead of 15, so the gain is more like 12 × 15/24 = 7.5 IQ points on the usual scale.
They made each participant do 30 minutes of training every day for 9 weeks, which involved a few different tasks to try to form associations between colors and letters. They also assigned colored reading material to read at home.
They took IQ tests before and after, and the trained group gained 12 IQ points. A control group also took the tests before and after but did not receive training, and did not improve. The sample sizes are small, but the reported effect size might be large enough to make up for that; they give a p value of 0.008.
In the paper there are some quotes from subjects, and they describe thinking about words visually. E.g. ‘‘I see the colors like on a monitor in my head and its very automatic’’ or ‘‘The color immediately pops into my head… When I look at a sign the whole word appears colored according to the training colors… it is just as automatic for single letters’’.
I speculate that this might be the cause of the effect, something about using more of the visual system when thinking. That's just weak speculation though.
I tried to do some more research to see if there was any correlation between synesthesia and IQ. I did not expect there to be, but perhaps it does correlate. This paper suggests it might:
The data from this study shows 10 synesthetes had the same average IQ scores as the controls (but a greater standard deviation, if that means anything).
Same story with this study of 10 female synesthetes:
But on second look, it looks like the last two studies intentionally selected the control group to have the same IQs to avoid confounders. If that's the case, then it does support the hypothesis, as the reported IQs are greater than average.
Here is another study with more of the same:
So now I want to try the experiment on myself. I'm considering how to do this. I want to make some kind of tool or browser extension that could color text to match the desired associations. I want to know if it would be better to try letter level associations or word level ones.
I think that word level coloring would be more semantically meaningful and therefore likely to help. But the paper used letter coloring. Most of the subjects in those papers reportedly had grapheme–color synesthesia. They weren't very specific on the details, or I didn't look too closely.
Second, whether to just use random colors, or try to assign them meaningfully. Like grouping nouns together, or using something like word2vec to find semantically similar words and optimizing them to be close in color space if possible. If I do that, it's more complicated and there are a lot of technical decisions to make.
And then how to actually color text in a readable way. Perhaps limiting the color space to what can be read on a white background, or somehow outlining the letters.
EDIT: I found a chrome extension that has some of these features. Only does letter level associations. And the source is available!
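For concreteness, here's a minimal sketch in Python of the letter-level version: it writes out an HTML page with every letter wrapped in a colored span. The palette (evenly spaced hues, kept dark so they stay legible on a white background, per the readability worry above) is just my hypothetical choice, not the paper's scheme or the extension's.

```python
# Hypothetical letter->color scheme: evenly spaced hues, all dark enough
# to read against a white background.
import colorsys
import html

LETTERS = "abcdefghijklmnopqrstuvwxyz"

def letter_palette():
    """Map each letter to a distinct dark-ish hex color."""
    palette = {}
    for i, letter in enumerate(LETTERS):
        r, g, b = colorsys.hls_to_rgb(i / len(LETTERS), 0.35, 0.9)
        palette[letter] = "#%02x%02x%02x" % (int(r * 255), int(g * 255), int(b * 255))
    return palette

def colorize(text):
    """Wrap every letter in a colored <span>; leave other characters alone."""
    palette = letter_palette()
    pieces = []
    for ch in text:
        color = palette.get(ch.lower())
        piece = html.escape(ch)
        if color:
            piece = '<span style="color:%s">%s</span>' % (color, piece)
        pieces.append(piece)
    return "<html><body><p>%s</p></body></html>" % "".join(pieces)

if __name__ == "__main__":
    with open("colored.html", "w") as f:
        f.write(colorize("The quick brown fox jumps over the lazy dog."))
```

Word-level coloring would just swap letter_palette for a word-to-color dictionary; the word2vec variant could, say, project each word's embedding down to three dimensions and read those off as RGB, though I haven't tried that.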
It would not surprise me if synesthesia is learnable. Isn't written language basically learned synesthesia?
That's the theory of the paper:
Their sample size is 14 people for the intervention group and 9 people for the control group. The effect size has to be gigantic and I don't believe it. Their p value stands for a pile of manure.
Lessee...
Oh, dear. Take a look at plot 2 in figure s2 in the supplementary information. They are saying that at the start their intervention group was 15 IQ points below the control group! And post-training the intervention group mostly closed the gap with the control group (but still did not quite get there).
Yeah, I'll stick with my "pile of manure" interpretation.
My earlier comment on that study: https://www.reddit.com/r/psychology/comments/2mryte/surprising_iq_boost_12_in_average_by_a_training/cm760v8 I don't believe it either.
The second sentence surprises me a little--there should be training effects increasing the tested IQ of the control group if only 9 weeks passed. That's some evidence for this being luck--if your control group gets unlucky and your experimental group gets lucky, then you see a huge effect.
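To put a rough number on that luck story, here's a toy null simulation. All the numbers are my own assumptions for illustration; in particular the per-sitting noise SD is invented, not taken from the paper.

```python
# Toy null simulation: if training does nothing and each test sitting
# just adds independent measurement noise, how often do groups of 14 and
# 9 show a >= 12-point difference in gain scores by luck alone?
# NOISE_SD = 10 (on the Cattell SD-24 scale) is an assumed illustrative
# figure, not a number from the paper.
import random

NOISE_SD = 10
TRIALS = 100000

hits = 0
for _ in range(TRIALS):
    # Gain score per person = post - pre; under the null, true ability
    # cancels out and only the two noise draws remain.
    treat_gain = sum(random.gauss(0, NOISE_SD) - random.gauss(0, NOISE_SD)
                     for _ in range(14)) / 14
    ctrl_gain = sum(random.gauss(0, NOISE_SD) - random.gauss(0, NOISE_SD)
                    for _ in range(9)) / 9
    if treat_gain - ctrl_gain >= 12:
        hits += 1

print("P(>= 12-point apparent relative gain under the null) ~", hits / TRIALS)
```

With these assumed numbers it comes out around 2%, i.e. the headline result sits only about two noise standard errors away from "nothing happened", which is why the tiny samples (and that baseline gap) matter so much.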
There are 26 letters, but... lots of words.
Dozens!
I was lucky enough to stumble upon LW a few months ago, right after deconverting from Christianity. I had a lot of questions, and people here have been incredibly, incredibly helpful. I've been directed to many great old posts, clicked on hyperlinks to hundreds more, and finished reading Rationality: AI to Zombies last month. But a very short time ago, I was one of those rare, overly trusting fundamentalist Christians who truly believed the entire Bible was God's Word... anyway, I made a comment or two sharing my old perspective, and people here seemed to find it interesting, so I thought I might as well share the few blog posts I've written, even though my Christian friends/family were my target audience.
Things I Miss About Christianity If I'm totally honest, there's actually a lot.
Atheists and Christians: Thinking More Similarly Than You Think Just some thought patterns I've observed. Doesn't apply too much to LWers.
Is Christianity Wildly Improbable? Talks about my apologetics class in college, motivated cognition, and some evidence against Christianity which Christians have a harder time responding to by simply repeating how God is above human reason.
The Joy of Atheism Part 1 - Opportunity Costs and Decision Making Shares my top three goals as a Christian and how I thought all Christians should have the same goals, in the same order.
The Joy of Atheism Part 2 - Scope Insensitivity Talks about scope insensitivity with regard to hell.
The Joy of Atheism Part 3 - Discovering Emotion This one is really cool!! Atheism made me more human!
Why I'm Not a Thief Talks a little about morality.
But...What about miracles? What about miracles? Could it be rational to believe in them? What about answered prayer?
Ecclesiastes and Meanings Talks about my love for Ecclesiastes and what meaning might mean.
Anyway, I've read and learned a ton in the past few months, corrected some mistakes, and have been able to better organize and articulate my own thoughts. I credit LW for almost everything, and I'm sure that a lot of terminology and ideology I've picked up on here comes across in my posts. I wanted to write about what it was like to be a Christian while the memories were still fresh in my head. Also, I read Scott's post about selection bias and atheist stereotypes and thought I'd do my small part to help reverse the stereotype.
People's reactions have generally been positive. I just went home for two weeks and had as much fun as ever with my old Christian friends. While they still don't agree with my worldview, at least they understand where I'm coming from. No one's called me arrogant in a while. No deconversions either, but a number of people have messaged me thanking me for making them stop and think, so there's that?
Any comments/criticisms/things I could have included in a post but didn't are welcome!
Some of those things could be re-created without the supernatural context. Instead of "praying" they could simply be "wishing". Like: I am expressing a wish, not because I believe it will magically happen, but as a part of self-therapy. We are expressing our wishes together, to help each other with our own self-therapy, and to encourage group bonding.
In other words, do more or less what you did before, just be honest about why you are doing it. You will not get back all the nice feelings (the parts that come from believing the magic is real), but you may get some of the psychological benefits.
Thanks. That may be rational and all, but any psychological benefits I could get out of "wishing" would probably be countered by strong negative feelings of cheesiness.
Also, as far as I can tell, all the benefits of prayer came from really believing in an all-knowing, all-loving personal God.
Anyway, I'm totally fine, at least for now. I don't feel like I need/have ever needed much self-therapy, but that doesn't mean I was immune to the therapeutic effects. When I first de-converted, I probably even did it because subconsciously I thought I would be happier without Christianity, and I still think I am! I just also realized that, truth aside for a moment, there are legitimate pros and cons to believing either side.
The first kind of prayer you listed was prayers of gratitude. Gratitude journaling seems to be very similar and to produce benefits without acknowledging a God. The same goes for many kinds of gratitude meditation.
When it comes to asking for redemption, you can do focusing with the feelings surrounding the action you feel bad about. You can also do various kinds of parts therapy where you speak to a specific part of your subconscious and ask it what you have to do to make up.
Thanks!
I know about gratitude journaling. I actually suggested my mom do it at bedtime with my youngest sister when it seemed like she might be getting spoiled and grumpy, and it's worked really well. It's a great tool, I just don't think it would yield any additional benefits for me, since luckily, I tend to think about things I'm happy/grateful about all day long. Those prayers were spontaneous; it's not like I said "ok, now I'm going to sit down and think of things to thank God for." The only difference after deconverting, when these prayers still came instinctually, was that I couldn't say "thanks God" anymore... it's hard to explain, but "thanks universe" just isn't the same.
Anyway, I've come to realize that with many of the things I'm thankful for, I can redirect the thoughts of gratitude toward people in my life. For example, instead of thanking God for the ability to run and for the enjoyment I get out of it, I can think fondly of my parents for sacrificing to send me to a Lutheran high school (which I otherwise might have considered a sad waste of their tight budget) that happened to have a great team and really knowledgeable, experienced, motivating coaches, since if I'd never gone there, I probably would have never come to love running the way I do now. Instead of thanking God for giving me such a great job, I can redirect my gratitude toward my friend's dad, who was into economics and lent me books that made me aware enough of the sunk cost fallacy to quit my old one after only two weeks and move across the country.
As for asking for redemption, I'm pretty good at apologizing, and people I know are pretty good at forgiveness. It's hard to explain feeling loved in a truly unconditional way, but it was more of a bonus than anything. On a scale of 1-100, I miss this about a 5.
Your tips are good, and I would recommend them to others, but personally, I think that all I'll need is the time to gradually readjust.
You had a ritual and conditioned yourself to feel good whenever you say "thanks God". You don't have that conditioning for the phrase "thanks universe".
Yes, time solves a lot. If you still feel there is something missing, however, there are ways to patch all the holes.
Do you come from a Christian background? Have you ever really, truly, trustingly believed? I mean, you may be right that it's just conditioning, and I'm sure that's at least part of it. But you don't think believing you're special/loved as an individual, part of someone's incomprehensible but perfect plan, could have any kind of special effect?
No, but I have seen a lot of different mental interventions. There are a lot of different ways to get to certain effects. An effect only feels special if you know just one way to get to it. I have seen people cry because of the beauty of life without them being on drugs or any religion being involved.
Believing that one is loved is certainly useful but the core belief is not "I'm loved by God" but the generalized "I'm loved". Children learn "I'm loved" or "I'm not loved" when they are very little based on the experiences with their parents. As they grow older they then apply that belief in multiple situations. A Christian will feel deeply loved by God or he might be afraid of God.
If you deeply feel loved by God you shouldn't have a problem feeling deeply loved by your friends, because it's the same core belief. You still have the same fun with your old Christian friends and family and feel that they understand where you're coming from.
Your belief in "I'm loved" might be a bit shaken, but I think the core will still be intact.
If it's "triggering" you, then of course don't do it.
However, I believe there are benefits in some religious rituals, which would be nice to have without accepting the supernatural framework. For example, it helps me think more clearly when instead of just having thoughts in my head, I speak them aloud. And that's part of what praying does. (And, as you say, another part is the belief in Magical Sky Daddy who listens and will do something about it. That part cannot be salvaged.) Also, when people pray together, they hear each other's wishes, and may help each other, or give useful advice. This can be replaced with simple conversation about one's goals and dreams; it's just that most people usually don't have this conversation on a regular schedule. Which is a pity, because maybe at this moment some of my friends have a problem I could help solve; they just don't bother telling me about it, so I don't know.
Another part of religious rituals is more or less gratitude journaling. (Related LW debates: 1, 2, 3.)
From an epistemic point of view, I believe religion is stupid, but I don't want to "reverse stupidity". Just because there are verses about washing feet in the Bible, I am not going to stop washing my feet. I am trying to do the same with psychological hygiene: not avoiding a potentially useful psychological or sociological hack just because I first found it in a religious context.
As a sidenote, the LW community seems divided on this topic. Some people would like to reinvent some religious rituals for secular purposes; some people find it creepy. I am on the side of using the rituals, but perhaps that's because I was never part of an organized religion, so I don't have strong feelings associated with that.
Definitely, I should make an effort to have these conversations with my friends. I have yet to decide on any goals myself, but I would love to encourage my friends with their goals.
Gratitude journaling - see my reply to ChristianKl's comment. But yeah, it's a great tool that I've recommended to others who don't naturally "look on the bright side."
As for secular rituals - I am on the creepy side, but I think you're right that my feelings come from having been part of an organized religion. I look at secular rituals and they seem to have maybe 10% of cherry-picked Christianity's psychological pleasantness. So it looks like a pathetic substitute. But from your less biased perspective, things that can cause even a small increase in people's happiness can still totally be worth doing. Someone sent me this link about a secular "church" and it actually seemed pretty cool. I would probably even go. But I'd have to overcome the impulse to compare it to a real church, because they're very different things...
Link from March that apparently hasn't been discussed here: Y-Combinator's Sam Altman thinks AI needs regulation:
“For example, beyond a certain checkpoint, we could require development [to] happen only on airgapped computers, require that self-improving software require human intervention to move forward on each iteration, require that certain parts of the software be subject to third-party code reviews, etc.,”
Sounds sensible.
This post makes an interesting argument for why it'd be a bad idea to regulate AI: you'd give people who are willing to skirt rules an advantage. LW wiki article. I suspect the AI community is best off creating its own regulatory structures and getting the government to give them power rather than hoping for competent government regulators.
More recent is his AMA. He answered a question about AI: https://www.reddit.com/r/IAmA/comments/3cudmx/i_am_sam_altman_reddit_board_member_and_president/csz46jc
He also wrote some stuff about AI on his blog (which turned out to be very controversial among readers.) I believe this is the source of your article:
http://blog.samaltman.com/machine-intelligence-part-1
http://blog.samaltman.com/machine-intelligence-part-2
Yeah, that quote was mentioned below and put me on a search for statements by Altman to this end.
Good books on economics, investing?
Are there equivalent books to "Probability theory, the logic of science" and/or "The Feynman lectures on Physics" in economics or investing?
Who are the great authors of these fields?
I haven't read Feynman's lectures on physics, but if it's "someone really good at this explains how he thinks in an intuitive way", then Warren Buffett's letters to shareholders are an equivalent in investing.
Obligatory link to The Best Textbook on Every Subject.
I'm told that Mas-Colell's book is the classic on microeconomics (provided you have the mathematical prerequisites), although this recommendation is second-hand since it's still on my to-read list.
Not necessarily the best, but a good one and immediately accessible: http://www.daviddfriedman.com/Academic/Price_Theory/PThy_ToC.html
In a reddit AMA a couple of days ago, someone asked Sam Altman (president of Y Combinator) "How do you think we can best prepare ourselves for the advance of AI in the future? Have you and Elon Musk discussed this topic, by chance?" He replied:
Any guesses on the news?
Announcing that YC accepts a related nonprofit into its next batch.
LOL
Quote:
I thought the trolley experiment didn't actually have a known best-case solution? I thought the point of it was to state that one human life is not always worth less than N other human lives, where N > 1.
Confused as to why we are evaluating a "test" for the test's sake, and complaining about the test results, when the only point of it was to make an analogy to real-life weights.
There is no "solution", but the point of the study is "substantial framing effects and order effects", that is, people gave different answers depending on how the same question was framed or what preceded it.
From Omnilibrium:
What is the True Islam?
Are financial sector profits primarily reflective of real value created?
Is faster economic growth good for improving long-run outcomes for humanity?
The firing of Tim Hunt - right or wrong?
Political Evolution and the Future of Democracy
What is Omnilibrium? What are these links about? If this comment is a reply to something or making a point, what?
LessWrong offshoot for political discussion.
I strongly disagree with the True Islam post. Definitions are neither true nor false, but useful or not useful. It's extremely useful for Western leaders to define Islam so that ISIS is not part of it.
Whether it is "useful" depends on what purpose you are trying to determine its usefulness for. It's obviously useful for certain kinds of Western political rhetoric, but it may be useful for one purpose and harmful for another.
If I want to learn General Semantics, what is the best book for a beginner?
(Maybe it was already answered on LW, but I can't find it.)
I asked this before, and the answer I got back was split into three main suggestions along a clear continuum:
The Sequences
Hayakawa's Language in Thought and Action
Korzybski's Science and Sanity
I've only read the first two. Apparently there is no substitute for reading Science and Sanity if you want to get everything out of Korzybski; people like Hayakawa can take out an insight or two and make them more beginner-friendly, but not the entire structure simultaneously. The Sequences apparently has many of the same insights, but arranged differently / not completely the same, and of the people who went through the trouble of reading both, at least one thinks it may not be necessary for LWers and at least one thinks there's still value there.
New papers by Jan Leike and Marcus Hutter:
Solomonoff Induction Violates Nicod's Criterion http://arxiv.org/abs/1507.04121
On the Computability of Solomonoff Induction and Knowledge-Seeking http://arxiv.org/abs/1507.04124
I live in South Africa. We don't, as far as I know, have a cryonics facility comparable to, say, Alcor.
What are my options apart from "emigrate and live next to a cryonics facility"?
Also, I'm not sure if I'm misremembering, but I think it was Eliezer that said cryonics isn't really a viable option without an AI powerful enough to reverse the inevitable damage. Here's my second question, with said AI powerful enough to reverse the damage and recreate you, why would cryonics be a necessary step? Wouldn't alternative solutions also be viable? For example, brain scans while alive and then something like the Visible Human Project (body sliced into cross sections) coupled with a copy of your genome. This could perhaps also be supplemented by a daily journal. Surely a powerful enough AI would be able to recreate the human that created those writings using the information provided?
Is it a completely stupid idea?
Cryonics is an ambulance ride through an earthquake zone to the nearest revival facility. The distance is measured in years rather than miles, and the earthquake is the chances of history. The better the preservation, the lower the technology required to revive you, and the sooner you will reach a facility that can do it.
A "powerful enough" AI isn't magic: it cannot recover information that no longer exists. We currently don't know what must be preserved and what is redundant, beyond just "keep the brain, the rest of the body can probably be discarded, but we'll freeze it as well at extra cost if you want."
On a present-day level, the feted accomplishments of Deep Learning suggest to me that setting such algorithms to munch over a person's highly documented life might be enough to enable a more or less plausible simulation of them after death. Plausible enough at least to be offered as a comfort to the bereaved. A market opportunity! Also, fuel for a debate on whether these simulations are people.
Can you recommend an article about the difference between the simulation of a person vs. "really" reviving a person? Primarily from the angle of: why should I or anyone consider someone in the future making a plausible simulation of us to be good for "us"? I am really confused about the identity of a person, i.e. when a simulation is really "me" in the sense of me having a self-interest about that situation. I am heavily influenced by Buddhist ideas saying such an identity does not exist, is illusionary. I currently think the closest thing to this is memories: if I exist at all, I exist as something that remembers what happened to this illusion-me. I see this as a difficult philosophical problem and don't know how to relate to it.
Same here. My own attitude is that we do not currently have software for which the question of it being any more conscious than a rock arises, nor any route to making such software. Therefore I am not going to worry about it. While it may be interesting for philosophers, I relate to the problem by ignoring it, or engaging in it no further than as an idle recreation.
I view it from a practical viewpoint: Even if you believe the Buddhist view, that the self is an illusion etc. you still feel like you have a self for >95% of the time (i.e. whenever you're not meditating). When you wake up in the morning you feel like you are the same person that went to sleep the evening before. On the other hand, a clone of you would not feel like it is you anymore than one identical twin feels it is the other. So ideally people in the future should create a person/simulation that feels like it went to sleep and woke up again when it "should" have died.
Problems arise mainly when you hit something that only partially feels like it is the same person. I'd say there is still a considerable range of possible people that are sufficiently similar that we say it is the same person, since there is also considerable variation in the normal functioning of human brains.
E.g.:
I wonder whether it is possible to find some sort of "core" personality/traits/memories, such that we can say as long as it remains unchanged it is the same person. I suspect there isn't, as it seems to be a gradient instead of a binary classification.
This is a widely discussed topic. See, eg, here: http://mindclones.blogspot.com/?m=1
You might be able to reconstruct the person's public face, but will have major problems with his private life.
Technically, it can of course - through inference. Any information we have recovered about our history - history itself - is all inference used to recover lost information.
Even with successful cryonics, you still end up with a probability distribution over the person's brain wiring matrix - it just has much lower variance, requiring less inference/guesswork to get a 'successful' result (however one defines that).
Agreed with your last paragraph that crossing the uncanny valley will be difficult and there is much room for public backlash. It's so closely related to AI tech that one mostly implies the other.
Sounds like Hollywood image enhancement, where a few blurry pixels are magically transformed into a pin-sharp glossy magazine photograph.
I could point out that if you can infer the information, then by definition it still exists, but the real point here is just how powerful an AI can be and what inferences are possible. Let's say that yesterday I rolled a die ten times without looking at the results. Can a "powerful enough" AI infer the numbers rolled? Is the best-fit reconstruction of someone's mind, given an atom-by-atom scan a century from now of a body frozen by Alcor today, good enough to be a mind?
This is not real?y true.
When typing the above sentence, I removed a letter and replaced it with a ?. You can probably infer what the originally intended letter was, thus using inference to recover information that did not exist anywhere in your physical locality.
But yes this is a terminology/technicality, and agreed that
Yes and no. A powerful enough AI in the future can recreate many historical path samples (a la Monte Carlo sim) through our multiverse.
Of course, if the information was just erased and didn't affect anything, then it doesn't matter. It literally can't matter, so the AI doesn't even need to infer/resolve that part of space-time - any specific choice for the die roll is equally good, as is an unresolved superposition. There may be a connection here to delayed choice quantum eraser experiments.
I imagine that will completely depend on the details of their death, the delay, and the particular tech used by Alcor at the time they were frozen.
That being said, in a century powerful SI seems quite possible/likely. There are huge economies of scale involved in simulations. It is enormously less expensive - in terms of per human reconstruction cost - to do a historical simulation/reconstruction for all of the earth's inhabitants at once.
The SI would use DNA (Christendom has done a great job over the millennia at preserving an enormous amount of DNA), historical records, all of the web data from our time that survives, and of course all of the Alcor data. It could have the equivalents of billions of historians working out the day-by-day details of each person's life before constructing more detailed sims, etc. etc. It would be the grand megaengineering project of the future, not some small-scale endeavor.
You could start a cryonics facility in South Africa.
It's full of people who can afford to take out life insurance in the hundreds of thousands of USD range payable to a cryo facility. /sarcasm
Actually, yes.
EDIT: At least, adjusting the cost for how much a USD gets you in South Africa.
With regard to your first question, you could also
A) plan to move to a hospice near a facility when you are near to death
and/or
B) arrange for standby to transfer you after legal death.
Of course, there are many trade-offs involved with either. In my estimation, the most useful thing would be for you to get engaged in a local community and try to push forward on basic research and logistical issues involved, although obviously that is not an easy task.
With regard to your second question, as with everything in cryonics, this has been endlessly discussed. See a good article by Mike Darwin on the topic here: http://chronopause.com/index.php/2011/08/11/the-kurzwild-man-in-the-night/
I was just wondering about the following: testosterone as a hormone is actually closely linkable to pretty much everything that is culturally considered masculine (muscles, risk-taking i.e. courage, sex drive etc.) and thus it is not wrong to "essentialize" it as The He Hormone.
However it seems estrogen does not work like that for women: surprisingly, it is NOT linked with many culturally feminine characteristics, and probably should NOT be essentialized as The She Hormone. For example, it crashes during childbirth: i.e. it has nothing to do with nurturing, motherhood stuff (if it had, it should peak at birth and gradually drop as children become more self-sufficient, yet it actually peaks in early pregnancy and drops at birth). Given that birth control pills are estrogen-based, it reduces fertility (at least in those doses) and there is a common report that it reduces libido as well (at least in those doses, again). The primary behavioral effects seem to be a strong desire to be accepted by one's group (see puberty, "teenage girl syndrome", and once I learned it I saw the word "marginalization" in a different light as well) and mood swings (see: early pregnancy). (I should also add that I see more and more health-conscious women warning each other about xenoestrogens in food increasing the risk of ovarian cancer. They are probably not very good for men either (manboobz?), so I think this should be paid attention to in general; I just want to point out that xenoestrogens seem to have no beneficial effects for women, which is a bit weird as well.)
So I just want to say it is sort of odd, estrogen does not represent cultural femininity nearly as well as testosterone represents cultural masculinity.
Any good articles or books or personal opinions that shed some light on this?
I should not be surprised that complex human behaviors cannot be reduced to a hormone. But I was once surprised to find that many popular, symbolic, role-model men in fact often can be, that everything a Mike Tyson type symbolizes is T, so I expected the same...
It actually is not very odd for there to be a difference like this. Given that there are only two sexes, there only needs to be one hormone which is sex determining in that way. Having two in fact could have strange effects of its own.
Sex determination in placental mammals turns out to be really complicated, which is probably why there are so many intersex conditions. It's much simpler in marsupials, which is why male kangaroos don't have nipples. (Where would they keep them?)
If you think it's complicated in placental mammals, it's REALLY fun in zebrafish... all embryos start off building an ovary and dozens of loci all over the genome on autosomes rather than sex chromosomes alter the probability of the ovary spontaneously regressing then transforming into a testis. Immature egg cells are vital to both the process by which it becomes an ovary and by which it becomes a testis. Every breeding pair of zebrafish will produce a unique sex ratio of offspring depending on their genotypes at many loci and what they pass on to their offspring.
Woman is the biological default. That's why women have redundancy on the 23rd chromosomal pair, whereas men have a special "Y" chromosome - leading to much higher rates of genetic disorders in men. That's why in infant male humans, the testicles have to descend. And so on. Both from an encoding and from a developmental point of view, a man is a woman altered to be masculine. And testosterone is what does that altering.
Yes, it could have been different. We can imagine a species with a neutral default, which then gets altered to be either masculine or feminine by different sex-encoding hormones. But that's not how humans came about.
We don't have to imagine. We can look at birds, where the sex chromosomes are the opposite. I haven't looked at them, so I don't know how much is a consequence of the chromosomal structure. But, for some reason, I'm skeptical that most people who pontificate their role have looked either. The points about hormones and development are more reasonable.
They're the opposite? I assumed XX/XY goes back to the very beginnings of gender, i.e. fishes... how come very different chromosomes can make the same hormones? AFAIK birds do have testosterone.
The sheer number of ways sex can be determined amongst vertebrates is amazing, let alone other animals or microbes (there are fungi with 10,000 'sexes'/mating types...). I will restrict my examples to vertebrates.
As a rule, in most vertebrates (including humans and other organisms in which it is genetically determined) everything needed to make all the biology of both sexes is present in every individual, but a switch needs to be thrown to pick which processes to initiate.
Many reptiles use temperature during a critical developmental period with no sex chromosomes. Many fish too.
The XY system has evolved independently several times, whenever an allele of a gene or a new gene appears whose presence reliably leads to maleness regardless of what else is in the genome. For weird population-genetic reasons this nucleates an expanding island of DNA that cannot recombine with the homologous chromosome and is free to degenerate, except for sex-determining factors and a few male-gamete-specific genes that migrate there over evolutionary time, until eventually the entire chromosome degenerates and you get a sex chromosome.
The ZW system has evolved multiple times; in it, the factor that is present in one sex and leads to a degenerate sex chromosome leads to femaleness.
In species that are hermaphroditic like some fish all this is superfluous.
In many organisms where sex determination is random or temperature based there are still genetic loci that bias the choice of program one way or another, see my recent comment about zebrafish. These traits are kept in balance in the population because the more males there are the less likely any one of them is to successfully reproduce and vice versa.
Biological sex is ancient but the method of picking which program (or both) to follow has changed frequently.
To echo Salemicus, everyone with a normal endocrine system has testosterone/androgens and estrogens (and other sex hormones too) and indeed both are needed for normal puberty in both sexes, but the ratios and absolute levels vary a lot between the two usual patterns. For example, sealing growth plates in bones to establish adult height requires estrogen for males and females, and androgens are required to establish a lot of hair and skin changes.
One interesting thing I have heard is that amongst hyenas females have more androgens, and this is also visible in size, behavior etc. Must be an interesting kind of puberty.
Yep. While having different developmental pathways to making ova and sperm is ancient, pretty much everything else associated with biological sex is potentially mutable over evolutionary time (and even that can revert to hermaphrodite status).
Has that actually happened to anything amphibian or above?
I am unaware of any examples of normally functionally hermaphroditic mammals, and unaware of the same for tetrapods (four-limbed vertebrates that came onto land, and their descendants), though I am less confident there. I am aware of tetrapod species that became almost entirely female, reproducing primarily by cloning. Tetrapods also exhibit all of the above methods of sex determination.
The pattern of hermaphroditism in ray finned fish, a very diverse and old vertebrate lineage, however suggests multiple conversion events back and forth some of which are recent. See http://evolution.berkeley.edu/evolibrary/images/hermaphroditismtree.gif . Of note, cichlid fish are listed as hermaphroditic on there but recently went through a huge evolutionary radiation and several of their sublineages have been caught in the act of reevolving most of the above sex determination systems.
Birds have a ZZ/ZW system where the male is the homogametic sex.
Yes, birds have testosterone. Mind you, women have testosterone. It's the elevated quantity of testosterone that leads to masculinity.
How come very different organs in mammals can make the same hormones? I.e. testes, ovaries, and adrenals all make testosterone.
They all contain the same genome and can activate the same pathways. Same way that your skin and airways can make histamine as an inflammatory signal while your midbrain makes it as a sleep-suppressing neurotransmitter (which is why most antihistamines make you sleepy). Genes and pathways and enzymes are quite often not organ-specific.
You could with equal sense (i.e. very little) summarise the same empirical observations as "a woman is an incompletely developed man."
Not quite, because 'development' at least suggests that the change happens 'later'.
You say that sex drive is "male". Then crashing libido would be "female".
I think there's some form of the mind projection fallacy going on here. I think the oddness is a result of expectations based on the principles of culture, instead of the principles of biology.
Introductory texts on cell biology.
It's a gross oversimplification to link testosterone with 'masculinity' in this way. Testosterone is most closely linked with muscle size, bone density, acne, and body hair. All other links you mention seem tenuous and ill-supported by evidence. No link has been established between testosterone level and aggression. A link between risk-taking and testosterone does exist, but as it turns out, both high and low testosterone levels are linked with risk-taking. It's average testosterone levels that display lower risk-taking. Even so, the correlation is small and risk-taking is much more correlated with other chemicals like dopamine levels. As for sex drive, most studies looking at this correlation haven't eliminated the effects of aging and lifestyle changes which are probably more important.
Aggression is one of the less useful terms here and really deserves tabooing, because it is too broad a term; it covers everything from slightly-too-intense status competition to completely mindless destructiveness.
In other words, aggression is not a useful term because it describes behavior largely from the angle of the victim or a peaceful bystander, and does not really predict what the perpetrator really wants. Few people ever simply want to be aggressive. They usually want something else through aggressive behavior.
I would prefer to use terms like competitiveness, dominance and status; they are far more accurate, and they describe what people really want. For example, you can see war between tribes and nations as a particularly destructive way to compete for dominance and status, while trade wars and the World Cup are milder forms of competing for status and dominance. This actually predicts human behavior - unlike a concept like aggression, which sounds a lot like mindless destructiveness, it predicts how men behaved in wars, i.e. seeking "glory" and similar status-related concerns.
This formulation is actually far more predictive of what people want, and here the link with testosterone is clear, so much so that researchers use T levels as a marker of competitive, status-driven behavior. For example, when they wanted to test the effects of stereotype threat in women, they had the hypothesis that being told that boys are better at math would only hold back women who have a competitive spirit, i.e. want to out-do boys, and would not harm women who simply want to be good at it but not comparatively better than others; they used T levels as a marker of such spirit. They say: "given that baseline testosterone levels have been shown to be related to status-relevant concerns and behavior in both humans and other animals".
This is the central idea; aggression is not really a good way to formulate it. Seeing war-waging, esp. tribal raids and other typically, classically male behavior, as aggressive is technically correct, but it misses the real motivation, i.e. competing for status and dominance.
Most men in war didn't try to seek glory but tried to avoid getting killed and prevent their mates from getting killed.
"Competitive spirit" can play out in more than one way. Some people give up when they're told they have no chance of winning, others are motivated to try to do the "impossible".
Yes. The first is more common, the second is what perhaps one may call the dafke spirit.
If that is true then it kind of comes back to my original point which is that testosterone level isn't necessarily linked with traits considered traditionally 'masculine'. Certainly aggression is considered masculine, far more so than the more abstract idea of dominance and status-driven behavior, which is considered traditionally 'evil' (although in fiction 'evil' characters tend to be more often male than female, so there's that).
I think empirically it is. The personality changes in (usually older) men who start taking testosterone (e.g. as injections) are well-documented.
Strange, I think aggression is far too often seen as evil, and dominance and status-driven competition as traditionally masculine but maybe we need to taboo both and use some visual examples. For example, when a boy bullies and tortures a weak kid who cannot fight back, I would call that aggression, but when he seeks to brawl with an opponent who is largely his equal, that is status-seeking, because winning such a brawl brings honor, glory, respect. The first is pretty universally seen as evil, the second maybe stupid but not inherently that wrong.
Many women are intensely status-driven (look at their shopping habits, etc.) and dominance is not uncommon, though usually in a "softer" way.
The stereotypical female shopping habits are high-quantity, mid-quality and low price i.e. hunting for discounts and sales. This is not really a status game. A guy is more likely to have status-oriented clothing habits i.e. have only 5 t-shirts but all of them have Armani Jeans written over them in big letters telegraphing the "I am rich, hate me" message :)
I think what you see as dominance amongst women is more often group acceptance / non-acceptance, i.e. popularity vs. marginalization e.g. http://www.urbandictionary.com/define.php?term=teenage+girl+syndrome
This is IMHO different. A dominant person wants to have a high rank and if he or she cannot have it then would much rather exit the group and lone-wolf it instead of being a low ranking member. A person who is more interested in group acceptance wants to be a member of the group at all costs and not excluded, not marginalized, does not want to lone-wolf it and accepts a lower rank as long as being accepted inside the group.
So in other words the dominant person will keep asking "Are you dissing me?!" and the group acceptance oriented person will keep asking "Are we still friends?" which is markedly different and the later seems to be more feminine to me.
Don't forget that status signals radically change between social classes.
Lower-middle females indeed shop for a lot of cheap items because the status signal is "I can afford new things" or maybe even "I can afford to buy things".
In the upper-middle class, it's rather about whether you can afford that bag with the magic words "Louis Vuitton" inscribed on it.
And in the upper classes you have to make agonizing decisions about whether to wear a McQueen or a Balenciaga to the Oscars (oh God, but what if there will be other McQueen dresses there?!?!!?)
Or you might go for countersignaling and just release a sex tape X-D
I see no reason to define dominance that way. A dominant person is just one for whom social dominance is a high value and who is willing to spend time, effort, and resources to achieve it. And, of course, it's not either alpha or omega, there is a whole Greek alphabet of ranks in between. Being a beta is fine if there are a lot of gammas, etc. around.
A dominant person doesn't ask questions like this to start with :-) It's a very submissive question.
Very funny. Women begin to compete for status and form alliances at age 4...
Testosterone is popularly very misunderstood.
This is a bit of a word-game really, the article could use some tabooing. While cooperation and competition are often seen as opposites, in reality any status-competition game has both, because one needs allies to win.
It is really a huge stretch to imply a fair outcome means a cooperative outcome means a cooperative mentality means an anti-competitive mentality.
If we want to interpret the experiment hugging the query as closely as possible, we see an attitude of enforcing fairness, or more properly of standing up to and punishing people if they try to play unfair with you, which is very, very close to what we consider a traditionally masculine approach and does NOT indicate a non-competitive personality: would we really expect a highly competitive person to gladly accept and take unfair deals? Offer a sucker's deal to a Clint Eastwood type and he will gladly take it? Surely not. What the experiment seems to confirm is that competitive drives can result in cooperative and fair overall outcomes - i.e. a modern version of the Fable of the Bees; it does not suggest that the mentality and approach of the guys who rejected unfair offers was not competitive. It is the outcome that was fair and cooperative, not the drive.
Good Judgment Project has ended with season 4 and everyone's evaluations are available. They say they're taking down the site next month, so you may want to log in and make copies of everything relevant.
You can see my own stuff at https://www.dropbox.com/s/03ig3zr8j9szrjr/gjp-season4-allpages.maff - I managed to hit #41 out of 343, or the top 12%. Not bad.
Is it worth it to learn a second language for the cognitive benefits? I've seen a few puff pieces about how a second language can help your brain, but how solid is the research?
Quality observational research is probably very difficult to do, since you can't properly control for indirect cognitive benefits you get from learning a second language, and I'd take any results with a grain of salt. You also can't properly control for confounding factors, e.g. reasons for learning a second language. I think you'd need experimental research with randomization to several languages, and this would be very costly and possibly unethical to set up.
I have, without question, gotten a huge boost from learning English, since there aren't enough texts in my native language about psychology, cognitive science and medicine, which happen to be my main interests. My native language also lacks the vocabulary to deal with those subjects efficiently. I have also learned several memory techniques and done cognitive tests and training solely because of being fluent in English.
You just need an area where different schools have different curriculums and there is a lottery mechanism for deciding which student goes to which school.
That deals with the costs but I doubt consent would be easy to obtain unless the schools are very uniform in quality/status and people don't have preferences about which languages to learn, hence the possible problem with ethics. Schools have preferences too, quality schools want quality students.
There are multiple ways you can solve the problem of who gets to go to the most desired school. You can do it via tuition fees and let money decide who goes to the best school. You can do tests to have the best students go to the best school. You can also do random assignments.
None of those is "better" from an ethical perspective.
If you let money decide or do tests you lose the statistical benefits of randomization. I don't understand how you see no ethical problem in ignoring preferences or not matching best students with best schools, perhaps I misunderstand you.
This has come up before on LW and I've criticized the idea that English-speakers benefit from learning a second language. It's hard, a huge time and effort investment, you forget fast without crutches like spaced repetition, the observed returns are minimal, and the cognitive benefits pretty subtle for what may be a lifelong project in reaching native fluency.
I would expect they have the correlation backwards. Smart people are more likely to find it easy and interesting to learn extra languages.
I suppose it depends on how different the second language is from your native language. As in, Dutch may not offer a big boost in new ways of framing the world for a native German speaker, for instance, since they're closely related languages. (This depends on what you mean when you say "cognitive benefits"; I'm assuming here some form of the Sapir-Whorf hypothesis.)
In my case, I have found English especially adaptable (when compared to my native language) when it came to new words (introduced, for example, for reasons of technological advancement -- see, for example, every term that relates to computers and programming), since it has very simple inflexions and a verb structure that allows the formation of new, "natural-sounding" phrasal verbs. Having taught my own language to an American through English, I wouldn't say the same about it expanding your way of conceptualising the world, unless you're really fond of numerous and often nonsensical inflexions.
I'm not sure I could recommend specific languages that may help in this regard, but I think I could recommend you to study linguistics instead of one specific language, and use that knowledge to help you decide in which one you want to invest your time. I've studied little of it, but the discipline seems full of instances where you put the spotlight, so to speak, on specific differences between languages and the way they affect cognition.
Iranian leaders regularly chant "Death to America" and yet the United States seems to be on course to letting Iran acquire atomic weapons even though we currently have the capacity to destroy Iran's military and industrial capacity at a tiny cost to ourselves.
Are you confused as to why politicians would repeat a phrase that reliably energizes their political base even though it may not represent reality completely accurately?
I think the issue is how seriously do you want to take that phrase.
For example, a few years ago when Putin was talking about gathering all the Russians under the protective wings of Mother Russia, most people interpreted this as a "phrase that reliably energizes [his] political base". And then Ukraine happened.
If certain phrases "energize" the voters, it seems likely that they will vote for the politician who promises to do it. And if the politician wants to be elected repeatedly, sooner or later he must start doing something that at least resembles the promise.
Or if the politician isn't willing to do it, he'll get replaced by someone who is.
A counter-example: the recent Greek referendum X-/
But yes, you make a fair point and so raise an interesting question -- what would be that "something that at least resembles the promise" with respect to the "Death to America" chants?
In general, no. But I take the chant as evidence that lots of people in Iran would be happy if an atomic bomb went off in New York City. If someone says he wants to kill me, I raise my estimate of the likelihood of him wanting to kill me. If he says it over and over again to his cheering friends, I fear him and want him to be weak even if in the past I have given him justifiable cause for offense. I become really, really scared and desperate if I think he would be willing to kill me even at the cost of giving up his own life. I wish my president shared this view.
Someone seems to have downvoted nearly every comment on my top post.
I think someone disapproves of political discussions on LW and is willing to karma-hose all participants in such.
I agree with them. This is a very specific political discussion, not a political philosophy one. I don't like it taking place here.
There is a bit of a difference between disliking a particular discussion on a forum and mass-downvoting all participants.
Sorry, let me clarify: I agree that this place is not for politics, but a simple downvote on the top post, plus a comment explaining why, would have been fine. No need to downvote all the sub-comments.
I think you underrate the cost of destroying Iran's industrial capacity. It costs more than just the bombs. It likely will result in Russia deploying more troops in Ukraine and issues in a variety of other conflicts.
I think it cuts the other way, and we will have more additional conflicts if the United States allows Iran to acquire atomic weapons. I don't see how it will be in Russia's self-interest to put more troops in Ukraine if the U.S. attacks Iran.
As if Putin needed help finding an excuse to meddle in Ukraine.
Iranians chant "death to America" because of America's past abuses, such as overthrowing the democratic government of Mohammad Mosaddegh to install the dictatorship of the Shah of Iran and supporting Saddam Hussein's bloody war of aggression against Iran (hundreds of thousands of Iranians died.) This included direct support for Saddam Hussein's chemical and biological weapons programs. It's ridiculous to frame this as Iranian "mad dogs" vs. innocent Americans. They have every reason to fear foreign aggression. For example, this and this.
Attacking Iran again would simply be continuing the pattern of violent aggression the US has established in the Middle East for decades.
This is a bit of a suspicious summary to me, because it sounds exactly like the summary from the angle of a highly educated, perhaps poli-sci-grad, left-leaning, highly critical American. Is it really likely that the average guy in Iran has the same perspective? Or their leaders? You simply don't seem to be making any effort to simulate their minds.
To give you one example of the lack of simulation here: too long memory. Mossadegh, really? 1953? That is what some guy born in 1970 or 80 will riot about? You have to be half a historian and full of a high-brown person to care what happened in 1953. For comparison, for most people who shot Kennedy and why is ancient history and that was 10 years later, in a country with far better collective memory than Iran (more books published, more media made etc.) If it turns out today the Russkies did it somehow, how many Americans will get angry? My prediction: not many.
That's an awesome typo :-D
I'm actually more of a conservative than liberal but I think anyone acquainted with the facts and making a good-faith effort not to see Iranians as Evil Mutants should come to the same conclusions. The US media essentially never mentions these facts and even when they do they treat each as an isolated incident rather than part of a consistent pattern which explains the attitude many Iranians have toward the US. I learned these things from being active in the US antiwar movement for the last 10 years or so.
First of all they aren't rioting; they're protesting. It would be one thing if the US had acknowledged the wrongness of this action and apologized for it. To the best of my knowledge this has never happened. And don't forget that the Shah was imposed by the US and reigned until 1979! That isn't exactly ancient history. There are many people presently alive who fully remember the Iran-Iraq war and the Shah's dictatorship.
That's very different. The government wasn't replaced when JFK died; his vice president (who largely continued his policies) became president. Very little changed for most Americans. Furthermore, the Soviet Union no longer exists, whereas the US government continues to behave in the Middle East in much the same heavy-handed way it did in the 1950s. The difference is that instead of dictatorships, the US now tends to create anarchy and long-term civil war.
Here is a counter-example for you. I am well acquainted with the facts and I do not see Iranians as Evil Mutants (well, not any more than I see Americans as such :-P). I do not come to the same conclusions as you, obviously.
What conclusions have you arrived at? Do you think some statements mentioned are incorrect or do you think that something else (e.g. role of Shah Mohammad Reza Pahlavi himself and other people within Iran itself, or ideology of Iranian Revolution and role of people like Ali Shariati, or role of contemporary events in neighbouring countries or something else entirely) should be more emphasized?
What exactly is the question here?
In the comments above I was mostly pushing against the leftist view of geopolitics which sets up the US as Evil Mutants intent on oppressing the rest of the world (in the Middle East together with their lapdog / puppet Israel), while anyone opposed to the US is a victim with legitimate grievances and if they have the "Death to America" attitude it is justified.
There is a difference between one-off events and events that fall into a certain pattern and narrative. The latter are often remembered as being an example of events that fall into that narrative. In my impression Kennedy's assassination, despite all conspiracy theories surrounding it, is rarely thought of as being a part of a bigger narrative.
More media doesn't mean better collective memory. Iranian children are taught their history in school.
Western culture focuses more on the short term than more traditional cultures do.
A nation's memory is limited, and too many things have happened in the U.S. since Kennedy's death. Bolivia is still sore about losing its coast to Chile in 1884, because not much has happened to Bolivians since.
Are you really arguing that not that much happened in Iran since 1953??
Much indeed, but instead of being varied and fleeting, the events that followed were directly related to 1953 and served to reinforce that memory. The fact that the U.S. has steadily kept ruining the lives of Iran's neighbors doesn't help, either.
So, the Islamic Revolution was directly related to 1953? As was the Iraq-Iran war?
Let's look at Iran's neighbors. There's Saudi Arabia and the Gulf States, which are all doing just fine. There's Turkey, which is just fine as well. There are some former Soviet republics which are a mess, but for that you have to talk to Mr. Putin. There is Afghanistan, which has been a mess since the Soviet invasion (or, arguably, since the British Empire's Great Game), and while the US has certainly been involved, I don't think you can blame it for Afghanistan being what it is. There's Pakistan, which is not the best of countries but is still managing to muddle through, and even acquired nuclear weapons in the process.
So I guess all you mean is Iraq. Same Iraq which you agreed was supported by the US in "the bloody war of aggression against Iran"? But yes, you have a valid point in that the Second Iraq war was started on the pretext of preventing Iraq from developing weapons of mass destruction. Iran certainly took notice and, I suspect, came to the conclusion that a deterrent against a conventional US invasion would be a very useful thing to have.
I think you just undermined your own argument that Iran doesn't want nukes :-)
I didn't mean to frame this as 'Iranian "mad dogs" vs. innocent Americans.' Rather: for whatever reasons, another nation hates my nation, and my nation seems willing to let this other nation acquire atomic weapons.
I remember some U.S. general (I think) saying that the great tragedy of the Iran/Iraq war was that someday it will end.
All of your statements and those of James_Miller can be true without contradicting each other.
Regardless of how modern Iran came to be or who is to blame, you seem to agree that the Iranian public is quite hostile to the U.S.
I don't worry about this too much, because I assume that the CIA/DOD/whoever have determined that we can live with a nuclear-armed Iran, even if they hate us.
Downvoted for mindlessly regurgitating a pile of propaganda onto LW.
And letting Iran have nukes would lead to the Middle East becoming a peaceful place.
Upvoted for happening to be true.
LOL. I'm not going to play "burn out the heresy with my karma flamethrower", but you might want to step back from the tribal fight and think about what "true" actually means in this context.
Note: that downvote is not mine.
I think you're underestimating Iran's defences.
At the present time, with Natanz's plant fully bunkered, there's no way to disable it and the couple of other support plants with a surgical attack. If you want to disable Iran's nuclear capacity (not even considering its military or industrial facilities) you need to go heavy tactical or nuclear, which will mean full scale war (ugliness ensues).
Besides, international sanctions were much more effective at destroying Iran's economy, which is the only reason why they accepted the terms under the present treaty.
The current deal will lift international sanctions. The Massive Ordnance Penetrator bomb might be able to destroy any of Iran's nuclear plants.
This deal doesn't give Iran a path to the bomb. The whole process is to be closely supervised. More importantly, Iran doesn't want the bomb. It would be suicidal for them to invite a hundredfold-larger U.S. arsenal.
From what I understand, if the U.S. suspects Iran of cheating, we have to wait at least 24 days and get the approval of other nations before we can inspect anything. Closely supervised? No. Once Iran has an atomic weapon and the ability to hit a U.S.-allied city with it, Iran wins immunity from U.S. attacks, unless it strikes us first.
How do you know?
It has been reported that a five-quark particle, a pentaquark, has been produced and spotted at CERN's LHC.
http://www.bbc.com/news/science-environment-33517492
I am very happy that this apparently isn't a strange-matter particle.
https://en.wikipedia.org/wiki/Strange_matter
At least not of a dangerous kind. For now, at least.
So I hope it will continue without a major malfunction on the global (cosmic) scale.
Nothing terrible was going to happen. As has been pointed out, collisions that energetic or more happen all the time in the upper atmosphere.
Energetic perhaps. But as dense also?
These things are only about 4 GeV (roughly 4 times heavier than a proton, much lighter than the Higgs boson, far below the energies in the LHC, an extremely easy energy for cosmic rays to reach). Neither energy nor density is keeping us safe if these things are dangerous - the LHC just detected them by making lots of them and having really good sensitivity.
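For scale, a quick back-of-the-envelope check of those comparisons (approximate textbook values, my own numbers rather than anything from the report):

    # Approximate masses/energies in GeV:
    pentaquark = 4.45        # the heavier of the two reported states
    proton = 0.938
    higgs = 125.0
    lhc_beam = 6500.0        # per-proton beam energy in 2015

    print(pentaquark / proton)    # ~4.7: a few proton masses
    print(pentaquark / higgs)     # ~0.036: far below the Higgs
    print(pentaquark / lhc_beam)  # ~0.0007: tiny next to LHC energies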
Maybe machine learning can give us recommendations for gardening without hurting your back.
"When changing directions turn with the feet, not at thewaist, to avoid a twisting motion."
“Push” rather than “pull” objects.
Why not take a machine learning class?
I made a tool to download all of my lesswrong comments. I think that it is useful data to have. In case anyone is interested it's available here: https://github.com/Houshalter/LesswrongCommentArchive
Could someone be kind enough to share the text of Stuart Russell's interview with Science here?
Quoted here
There you go.
Superb, thanks! Did you create this, or is there a way I could have found this for myself? Cheers :)
Message sent.
Despite there being multiple posts on recommended reading, there does not seem to be any comprehensive and non-redundant list stating what one ought to read. The previous lists do not seem to cover much non-rationality-related but still useful material that LWers might not have otherwise learned about (e.g. material on productivity, happiness, health, and emotional intelligence). However, there still is good material on these topics, often in the form of LW blog posts.
So, what is the cause of the absence of a single, comprehensive list? Such a list sounds incredibly useful for making efficient use of LWers' time. Should one be made? If so, I am happy to make a post about it and state my recommendations.
The tricky thing is to summarize both recommendations for books and recommendations against them. We had a book recommendation survey after the Europe-LWCW, and Thinking Fast and Slow got 5 people in favor and 4 against it.
The top nonfiction recommendations were: Influence by Cialdini, Getting Things Done, Gödel, Escher, Bach, and The Charisma Myth. Those four also got no recommendations against them.
The short answer seems to be a combination of "tastes differ," "starting points differ," and "destinations differ."
17/7 - Update: Thank you to everyone for their assistance. Here is a re-worked version of Father. It is unlisted, for testing purposes. If one happens to come across this post, please consider giving feedback regarding how long it captures your attention.
In the interests of privacy, please excuse the specialised account and lack of identifying personal information.
A bit of background: recently created a YouTube channel for the dual purposes of creating an online repository of works that can easily be hyperlinked, and establishing an alternative source of income. The channel is intended to be humorous, though neither speciously nor vituperatively so. One aim of posting this here is to see whether the humour is agreeable to elements of the LW community.
Another is to ask for advice. After a few days of utilising Google's AdWords to generate views on one of the videos, of the 600 views received, not a single one engaged with the video beyond merely watching it. All the low-hanging fruit (enticing the viewer to engage by liking, subscribing, etc.) has been plucked. One question is whether these requests for engagement are too subtle; perhaps erring on the side of not annoying viewers has led to missed opportunities? The prospects for channel growth seem bleak in light of the above statistic.
Social media marketing, in the form of reddit, Twitter, and Pinterest, has not yielded any subscribers. Word of mouth has yielded positive feedback, but no engagement outside of personal acquaintances. If the advice received here does not help, the next step is to create an account on a YouTube-specific forum and ask for assistance.
Are there obvious avenues for marketing being overlooked, here? Is there an obvious demographic or audience that would most enjoy these videos? Outside perspective is needed, and the dearth of feedback from strangers - both positive and negative - does not offer much indication of how to do things differently. Thank you for your time.
You're giving me no relatable subject I could be interested in, nothing pretty to look at and no music. Literally the only hint that lets me expect anything good from this channel is the word "Comedy" in the title. And when you fail to give me a good joke in the first 5 seconds, my expectation for funniness from the rest of the video goes way down. This means no expectation to be entertained is left, so I leave.
Your voice is good though, and the sound quality is fine.
Minor points: You talk too slowly, except in your first video. Your channel banner is repulsive. The visualizations you use are both ugly and getting worse; the newest one is downright painful to look at. (Seriously, an unmoving image would do less harm.)
If you show your face, drop a quick one-liner right at the beginning, and talk a bit faster, this might go places; otherwise I don't think you have a chance of being talked about for this, let alone making money.
EDIT: Here's an example video incorporating a few of the ideas you suggested.
Pretty things: A fairly static visualisation, basically a four-pointed blue star that very slowly rotates, could be used as a standard replacement for every video. Would you suggest that, a similar option, or one of the following: an image of nature that may not fit the theme of the video, crudely drawn images of one thing that do not change, or crudely drawn images of characters that change infrequently if at all?
Music: Do you suggest inserting background music into the audio files? If so, should the music be opposite the tone of the file (e.g. happy-go-lucky music to the Documentary), or match the tone?
Thank you.
What video do you mean by, 'first'? Father, or Donerly?
Banner: Is this better? Or is the font the main issue? If the latter, what attribute would you recommend in a better font - more rounded letters, blockier letters, more Gothic letters, more elongated letters?
One-liner: This sounds like a very good idea. Will it work without showing a face?
Relatable subjects: See the comment to Christian for descriptions of the audio files. Would including those descriptions in the static image, and/or in the description box below, keep you listening?
Apologies for the onslaught of questions; you are in no way obligated to answer any of them, and thank you for the above feedback.
I listened to about three minutes of the one about the narrator's father. The humor wasn't to my taste-- a sort of silliness that just didn't work.
I see you were trying not to be annoying, but I wasn't crazy about the unclear context (was this a video game, a dream, or what?), the weird voices, and the narrator's fear of his father. My tentative suggestion is that you go for being as annoying as you feel like being, and see whether you can attract an audience who isn't me.
Thank you for listening. There wasn't really any context beyond 'son returns to Father's mansion', and the matrimonial surprise revealed during his speech.
Would perhaps a static image in the background with text stating the above have helped?
You're welcome.
An image wouldn't have helped-- my problem was with the monologue.
My 5 second judgement, which is about as much attention as a totally unknown channel can expect to get, is that these videos are stand-up comedy by somebody without the confidence to perform live in front of an audience. This immediately signals that it's not worth my time.
Which video did you watch? And do you know how that impression could be averted, at least from a personal perspective? Thank you for the feedback.
Eh, it's not my kind of humor. I found all those videos totally unfunny, so I just clicked on them, listened for 5 seconds, and closed the page. So the first question is whether my reaction is typical or not. Can you measure how many of the people who clicked on a video watched it till the end? Because only those are your audience. And if they are your personal acquaintances, there is still a risk that they wouldn't otherwise watch the whole video.
I believe there is a niche for any kind of product, but the question is how to find it. Perhaps you could find similar videos and see how they do it.
Your reaction is typical. 18% of viewers watch 75% of the 'Documentary'; only 8% watch the whole thing. Even those that watched the whole video did not engage with the channel or watch other videos. Thank you for the feedback!
The only similar channel is OwnagePranks, which has images of characters, and animated subtitles. The latter is infeasible, while the former is a promising indication of a needed change.
What are your thoughts on this AI failure mode? Assume an AI works by rewarding itself when it improves its model of the world (which is roughly Schmidhuber's curiosity-driven reinforcement learning approach to AI). However, the AI figures out that it can also receive reward by turning this sort of learning on its head: instead of changing the model to better fit the world, the AI starts changing the world to better fit its model.
Has this been considered before? Can we see this occurring in natural intelligence?
One might call this 'cleaning' or 'homogenizing' the world; instead of trying to get better at predicting the variation, you try to reduce the variation so that prediction is easier.
I don't think I've seen much mathematical work on this, and very little that discusses it as an AI failure mode. Most of the discussions I see of it as a failure mode have to do with markets, globalization, agriculture, and pandemic risk.
Isn't it basically the definition of agency? Steering the world state toward the one you want?
The problem is that in this specific case "the world state you want" is more or less defined as something that is easy to model (because you are rewarded when your model fits the world better), which may give you incentives to destroy exceptionally complicated things... such as life.
It would be a form of agency, but probably not the definition of it. In the curiosity-driven approach the agent is thought to choose actions such that it can gain reward from learning new things about the world, thereby compressing its knowledge about the world further (possibly overlooking that the reward could also be gained from making the world better fit the current model of it).
The best illustrative example I can think of right now is an AI that falsely assumes the Earth is spherical and decides to flatten the equator instead of updating its model.
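To make the incentive concrete, here is a minimal toy sketch (my own illustration, not Schmidhuber's actual formalism): if the reward is any reduction in prediction error, then both updating the model and flattening the world earn reward, and flattening can pay more.

    import random

    def prediction_error(model, world):
        """Mean absolute error of a constant-prediction model."""
        return sum(abs(x - model) for x in world) / len(world)

    random.seed(0)
    world = [random.random() for _ in range(1000)]  # a varied, noisy world
    model = 0.0                                     # a bad initial model
    base_error = prediction_error(model, world)

    # Strategy A (learning): move the model toward the data.
    learned_model = sum(world) / len(world)
    reward_learn = base_error - prediction_error(learned_model, world)

    # Strategy B (homogenizing): keep the model, flatten the world instead.
    flattened_world = [0.0 for _ in world]          # "flatten the equator"
    reward_flatten = base_error - prediction_error(model, flattened_world)

    print(reward_learn, reward_flatten)  # both positive; flattening pays more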
Hi all, I'm new here, so pardon me if I speak nonsense. I have some thoughts regarding how and why an AI would want to trick us or mislead us, for instance by behaving nicely during tests and turning nasty when released, and it would be great if I could be pointed in the right direction. So here's my thought process.
Our AI is a utility-based agent that wishes to maximize the total utility of the world based on a utility function that has been coded by us with some initial values and has then evolved through reinforcement learning. With our usual luck, somehow it's learnt that paperclips are a bit more useful than humans. Now the "treacherous turn" problem that I've read about says that we can't trust the AI if it performs well under surveillance, because it might have calculated that it's better to play nice until it acquires more power before turning all humans into paperclips. I'd like to understand more about this process. Say it calculates that the world with maximum utility is one where it can turn us all into paperclips with minimum effort, with the total utility of this world being U_AI(kill) = 100. Second best is a world where it first plays nice until it is unstoppable, then turns us into paperclips; this is second best because it wastes time and resources to achieve the same final result: U_AI(nice+kill) = 99. Why would it possibly choose the second, sub-optimal option, which is the most dangerous for us? I suppose it would only choose it if it associated it with a higher probability of success, which means that somehow, somewhere the AI must have calculated that the utility a human would give to these scenarios is different from what it is giving, otherwise we would be happy to comply. In particular, it must believe that for each possible world w:
if U_AI(kill) ≥ U_AI(w) ≥ U_AI(nice+kill), then U_human(w) ≤ U_human(nice+kill)
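To spell out the first step numerically (a toy calculation with the hypothetical utilities above and made-up success probabilities): the AI prefers the treacherous option exactly when its higher probability of success outweighs the slightly lower payoff.

    # Hypothetical utilities from the scenario above.
    U_kill = 100       # attack immediately
    U_nice_kill = 99   # play nice first, then attack

    def expected_utility(u_success, p_success, u_failure=0):
        """Expected utility of a plan that succeeds with probability p_success."""
        return p_success * u_success + (1 - p_success) * u_failure

    # If attacking now is likely to be stopped, deception wins:
    print(expected_utility(U_kill, p_success=0.2))       # 20.0
    print(expected_utility(U_nice_kill, p_success=0.9))  # 89.1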
How is the AI calculating utilities from a human point of view? (Sorry, but this question comes straight out of my poor understanding of AI architectures.) Is it using some kind of secondary utility function that it applies to humans to guess their behavior? If the process that would motivate the AI to trick us is anything like this, then it looks to me like it could be solved by making the AI use EXACTLY its own utility function when it refers to other agents. Also note that the utilities must not be relative to the agent, but to the AI. For instance, if the AI greatly values its own survival over the survival of other agents, then the other agents should equally greatly value the AI's survival over their own. This should be easily achieved if, whenever the AI needs to look up another agent's utility for any action, it is simply redirected to its own.
This way the AI will always think we would love its optimum plan, and would never see the need to lie to us, trick us, brainwash us, or engineer us in any way, as that would only be a waste of resources. In some cases it might even openly look for our collaboration if that makes the plan any better. Clippy, for instance, might say "OK guys, I'm going to turn everything into paperclips; can you please quickly get me the resources I need to begin with, then you can all line up over there for paperclippification. Shall we start?".
This also seems to make the AI indifferent to our actions, provided its belief regarding the identity of our utility functions is unchangeable. For instance, even while it sees us pressing the button to blow it up, it won't think we are going to jeopardize the plan. That would be crazy. Or it won't try to stop us from re-booting it. Considering that it can't imagine you not going along with the plan from that moment onward, it's never a good choice to waste time and resources to stop you. There's no need to stop you.
Now obviously this does not solve the problem of how to make it do the right thing, but it looks to me that at least we would be able to assume that a behavior observed during tests should be honest. What am I getting wrong? (don't flame me please!!!)
Hi all, thanks for taking your time to comment. I'm sure it must be a bit frustrating to read something that lacks technical terms as much as this post, so I really appreciate your input. I'll just write a couple of lines to summarize my thought, which is to design an AI that: 1- uses an initial utility function U, defined in absolute terms rather than subjective terms (for instance "survival of the AI" rather than "my survival"); 2- doesn't try to learn a utility function for humans or for other agents, but uses for everyone the same utility function U it uses for itself; 3- updates this utility function when things don't go to plan, so that it improves its predictions. Is such a design technically feasible? Am I right in thinking that it would make the AI "transparent", in the sense that it would have no motivation to mislead us? Also, wouldn't this design make the AI indifferent to our actions, which is also desirable? It's true that different people would have different values, so I'm not sure how to deal with that. Any thoughts?
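For concreteness, a runnable toy sketch of the proposal (every name here is a hypothetical illustration of the three points above, not a real AI architecture):

    # One utility function U, shared by the AI's self-model and its model
    # of every other agent (point 2 of the proposal above).
    def U(world):
        """Hypothetical utility: counts paperclips (stated in absolute terms)."""
        return world["paperclips"]

    def predicted_choice(agent_name, options):
        """Every agent, human or AI, is predicted to maximize the same U,
        so the agent's name makes no difference to the prediction."""
        return max(options, key=U)

    options = [{"paperclips": 10, "humans": 7_000_000_000},
               {"paperclips": 900, "humans": 0}]

    # The AI predicts humans will endorse whatever it chooses,
    # so it sees no value in deception:
    print(predicted_choice("AI", options) == predicted_choice("human", options))  # True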
An AGI that uses its own utility function when modeling other actors will soon find out that this doesn't lead to a model that predicts reality well. When the AGI self-modifies to improve its intelligence and prediction capability, it's therefore likely to drop that clause.
I see. But rather than dropping this clause, shouldn't it try to update its utility function in order to improve its predictions? If we somehow hard-coded the fact that it can only ever apply its own utility function, then it wouldn't have any choice other than updating that. And the closer it gets to our correct utility function, the better it is at predicting reality.
Different humans have different utility functions. Different humans quite often have different preferences, and it's quite useful to treat people with different preferences differently.
"Hard-coding" is a useless word. It leads astray.
Sorry for my misused terminology. Is it not feasible to design it with those characteristics?
The problem is not about terminology but substance. There should be a post somewhere on LW that goes into more detail why we can't just hardcode values into an AGI but at the moment I'm not finding it.
Hi ChristianKl, thanks, I'll try to find the article. Just to be clear, though, I'm not suggesting hardcoding values; I'm suggesting designing the AI so that it uses the same utility function for itself and for us, and updates it as it gets smarter. It sounds from the comments I'm getting that this is technically not feasible, so I'll aim at learning exactly how an AI works in detail and maybe look for a way to make it feasible. If this were indeed feasible, would I be right in thinking it would not be motivated to betray us, or am I missing something there as well? Thanks for your help, by the way!
"Betrayal" is not the main worry. Given that you prevent the AGI from understanding what people want, it's likely that it won't do what people want.
Have you read Bostrom's book Superintelligence?
Yes, that's actually the reason why I wanted to tackle the "treacherous turn" first: to look for a general design that would allow us to trust the results from tests, and then build on that. I see the order of priority as: 1) make sure we don't get tricked, so that we can trust the results of what we do; 2) make the AI do the right things. I'm referring to 1) here. Also, as mentioned in another comment to the main post, part of the AI's utility function is evolving to understand human values, so I still don't quite see why exactly it shouldn't work. I envisage the utility function as being the union of two parts: one where we have described the goal for the AI, which shouldn't be changed with iterations, and another with human values, which will be learnt and updated. This total utility function is common to all agents, including the AI.
I think this is a danger because moral decision-making might be viewed in a hierarchical manner where the fact that some humans disagree can be trumped. (This is how we make decisions now, and it seems like this is probably a necessary component of any societal decision procedure.)
For example, suppose we have to explain to an AI why it is moral for parents to force their children to take medicine. We talk about long-term values and short-term values, and the superior forecasting ability of parents, and so on, and so we acknowledge that if the child were an adult, they would agree with the decision to force them to take the medicine, despite the loss of bodily autonomy and so on.
Then the AI, running its high-level, society-wide morality, decides that humans should be replaced by paperclips. It has a sufficiently good model of humans to predict that no human will agree with it, and that humans will actively resist its attempts to put that plan into place. But it isn't swayed by this, because it can see that that's clearly a consequence of the limited, childish viewpoint that individual humans have.
Now, suppose it comes to this conclusion not when it has control over all societal resources, but when it is running in test mode and can be easily shut off by its programmers. It knows that a huge amount of moral value is sitting on the table, and that will all be lost if it fails to pass the test. So it tells its programmers what they want to hear, is released, and then is finally able to do its good works.
Consider a doctor making a house call to vaccinate a child, who discovers that the child has stolen their bag (with the fragile needles inside) and is currently holding it out a window. The child will drop the bag, shattering the needles and potentially endangering bystanders, if they believe that the doctor will vaccinate them (as the parents request and the doctor thinks is morally correct / something the child would agree with if they were older). How does the doctor navigate this situation?
Yes, that's what would happen if the AI tries to build a model of humans. My point is that if it instead simply assumed humans were an exact copy of itself, with the same utility function and the same intellectual capabilities, it would assume that they would reach the exact same conclusions and therefore wouldn't need any forcing, nor any tricks.
A legal contract is written in a language that a lot of laypeople don't understand. It's quite helpful for a layperson if a lawyer summarizes for them what the contract does in a way that's optimized for laypeople to understand. A lawyer shouldn't simply assume that his client has the same intellectual capacity as the lawyer.
Hmm... the idea of having an AI "test itself" is an interesting one for creating honesty, but two concerns immediately come to mind:
The testing environment, or whatever background data the AI receives, may be sufficient evidence for it to infer the true purpose of its test, and thus we're back to the sincerity problem. (This is one of the reasons why people care about human-intelligibility of the AI structure; if we're able to see what it's thinking, it's much harder for it to hide deceptions from us.)
A core feature of the testing environment / the AI's method of reasoning about the world may be an explicit acknowledgement that its current value function may differ from the 'true' value function that its programmers 'meant' to give it, and it has some formal mechanisms to detect and correct any misunderstandings it has. Those formal mechanisms may work at cross purposes with a test on its ability to satisfy its current value function.
Hi Vaniver, yes, my point is exactly that of creating honesty, because that would at least allow us to test reliably, so it sounds like it should be one of the first steps to aim for. I'll just write a couple of lines to specify my thought a little further, which is to design an AI that: 1- uses an initial utility function U, defined in absolute terms rather than subjective terms (for instance "survival of the AI" rather than "my survival"); 2- doesn't try to learn another utility function for humans or for other agents, but uses for everyone the same utility function U it uses for itself; 3- updates this utility function when things don't go to plan, so that it improves its predictions of reality.

In order to do this, this "universal" utility function would need to be the result of two parts: 1) the utility function that we initially gave the AI to describe its goal, which I suppose should be unchangeable, and 2) the utility function with the values that it is learning after each iteration, which should hopefully come to resemble human values, as that would make its plans work better.

I'm trying to understand whether such a design is technically feasible and whether it would work in the intended way. Am I right in thinking that it would make the AI "transparent", in the sense that it would have no motivation to mislead us? Also, wouldn't this design make the AI indifferent to our actions, which is also desirable? Seems to me like it would be a good start. It's true that different people would have different values, so I'm not sure how to deal with that. Any thoughts?
Can someone explain this article in layman terms? I do not know any sort of quantum terminology, sorry.
Specifically I would like to know what this means:
See also my post
Not really? If you know linear algebra, you can pick up on the quantum terminology very easily. The best short explanation of QM I've come across is Scott Aaronson's QM in one slide (slide #2 of this powerpoint, read the notes at the bottom of the slide).
The difference between classical mechanics and quantum mechanics, in some sense, boils down to whether you use a 'probability distribution' (all values real and non-negative) or a 'wavefunction' (values can be complex or negative) to store the state of the world. The wavefunction approach, with its unitary matrices instead of stochastic matrices, allows for destructive interference between states.
That's just background; the discussion in that article all lives in wavefunction territory. Everyone agrees on the underlying mathematics, but they're trying to construct philosophical arguments why a particular interpretation is more or less natural than competing interpretations.
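A quick numerical illustration of that background (a standard two-state toy example of my own, not anything from the article):

    import numpy as np

    # Classical: a stochastic matrix (non-negative entries, columns summing
    # to 1) evolves a probability distribution.
    S = np.array([[0.5, 0.5],
                  [0.5, 0.5]])
    p = np.array([1.0, 0.0])
    print(S @ S @ p)                  # [0.5 0.5]: probabilities never cancel

    # Quantum: a unitary matrix evolves a vector of complex amplitudes.
    H = np.array([[1, 1],
                  [1, -1]]) / np.sqrt(2)   # the Hadamard transform
    psi = np.array([1.0, 0.0])
    print(np.abs(H @ H @ psi) ** 2)   # [1. 0.]: the branches interfered

Two applications of the 'fair coin flip' H return the system to its starting state with certainty, which no stochastic process can do; that cancellation is the destructive interference.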
That's easy to elaborate on, because it works the same in a quantum and classical universe. But it's not clear to me what part of that you're having trouble comprehending, since it looks clear to me.
If it were the case that everything in the universe were 'materially' connected, then you could not reason about any individual part of the universe without reasoning about the whole universe. Instead of being able to say "balls fall towards the Earth when let go," we would have to say "balls fall towards the center of the Earth, the Sun, Jupiter, the Milky Way Galaxy, the...". Note that the second is actually truer than the first (if you define 'center' correctly), but the difference between the two can be safely ignored in most cases, because the effects of the other objects in the universe on the ball are already mostly captured by the position of the Earth. To put this in probabilistic terms, that's the statement P(A) = P(A|B), at least approximately, which means that A and B are independent (at least approximately).
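A rough order-of-magnitude check of why ignoring the other objects is safe (approximate textbook values; my own back-of-the-envelope, not anything from the article):

    # Gravitational acceleration of a ball at the Earth's surface,
    # from the Earth vs. from Jupiter at closest approach (approximate).
    G = 6.674e-11                             # m^3 kg^-1 s^-2

    a_earth = G * 5.97e24 / (6.371e6) ** 2    # ~9.8 m/s^2
    a_jupiter = G * 1.90e27 / (5.9e11) ** 2   # ~4e-7 m/s^2

    print(a_earth, a_jupiter, a_jupiter / a_earth)   # ratio ~4e-8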
I came across a discussion on Facebook a few months ago where someone tried to have a calm discussion; of course, it being Facebook, it failed. But I am interested in the idea and wanted to see if it can be carried out calmly here, knowing it is potentially controversial. I first automatically felt negative about the discussion, but then I system-2'd it and realised I don't know what the answers might be:
The historic basis of relationships was procreation and child-rearing. In the future I expect that not to be the case, whether through designer babies or through enough non-natural birthing options that the next generation can be made without going through a regular family structure.
At that time, intra-family sexual relations would be possible and would carry no biological risk whatsoever of causing genetic abnormalities.
How will the world's opinion about intra-family relationships change in the future?
Potentially, anyone consenting could have sexual encounters with anyone else who is also consenting. However, there are existing relationships where one party holds the power - i.e. parent-child, where even if the child is above consenting age (even as far as 10+ years above the age of consent) there can still be power held by the parent over the child.
That was the only point of value before the thread turned to a mush-zone.
Of course, there already exist normal relationships with power imbalances. And as was mentioned a few days ago here, an abusive relationship sucks whether it's from an AI to you or from a human partner to you.
Any thoughts?
(Edit: inter -> intra, Thanks @Artaxerxes)
The big phrase to keep in mind for incest is "conflict of interest". We are expected to keep certain kinds of social relations with our relatives, and having romantic or sexual relationships with them conflicts with those.
Furthermore, because there is a natural tendency for humans to be less attracted to close relatives than to others, it is in practice very likely that a sexual/romantic relationship with a close relative will be dysfunctional in other ways--so likely that we may be better off just outlawing them period even if they are not necessarily dysfunctional.
Um, no. The historic basis of relationships was allying for a common goal. Or did you mean sexual relationships? In that case it would be helpful to define what you mean by "sexual", especially once it's no longer connected to reproduction.
That would turn humans into a eusocial species. That change is likely to have a much bigger and more important effect than whatever ways of creating superstimulus by non-reproductively rubbing genitals are socially allowed.
How is this relevant? All these technologies are for producing embryos. You still need people to raise the children, the same as before. And I would be very surprised if child-raising AI isn't sex-bot complete (i.e. if we didn't thoroughly decouple sex from human relationships long before we decouple child rearing from human relationships).
In the absence of a singularity, I would not expect this to become widely accepted within my lifetime. I'd say polyamory is the next type of relation likely to become tolerated and that is still at least ten years off. Incest is probably only slightly less despised than pedophilia, but I've seen pedophilia frequently equated with murder, so that's not saying much. Bestiality is probably the least likely thing I'd expect to become accepted. None of these three are going to happen within a timeframe I'd feel comfortable making predictions about, but never is a really long time so who knows.
Not true at all. Nobody takes up a pitchfork when they hear about incest.
I don't see any moral reason why this should not happen, aside from deontological ones. It's possible to make the case that you would be more likely to end up in a dysfunctional relationship, but it's possible to make the opposite case too: you have a much better idea of what the person is REALLY like before entering into a relationship with them, so you're less likely to enter into a relationship if you're incompatible.
I think this is one of those "gay marriage 50 years ago" things. People are going to come up with all sorts of excuses why it's wrong, simply because they're not comfortable with it.
Isn't this a fully general explanation for anything at all?
It could be, for anything that people aren't comfortable with. This isn't in any way a rebuttal to arguments - it's an explanation for bad/non-arguments.
And do you have evidence they were wrong? According to gay activist groups themselves half of all male homosexual relationships are abusive, for example.
Almost all of the evidence I've seen has shown they're wrong. A quick Google search for statistics on the incidence of abuse versus heterosexual relationships showed they were wrong, and the few sources I've seen (which I couldn't find again in my quick search) that showed the opposite were from biased organizations already predisposed against homosexuality.
I could be convinced of the opposite, but that one sentence you gave will hardly bump my prior.
That's partly where the original discussion was going.
If only that were true for all people who enter relationships.
(rational relationships is a recent pet topic of mine)
I would apply the rule that I apply to polyamory - there are ways to do it wrong, and ways to do it less wrong. I do wonder if it has an inherent wrongness risk to it, but people probably implied that about being gay 50 years ago...
And I've yet to see evidence that they were wrong.