Misunderstandings and ignorance of GCTA seem to be quite pervasive, so I've tried to write a Wikipedia article on it: https://en.wikipedia.org/wiki/GCTA
I think you need to read up a little more on behavioral genetics. To point out the obvious, besides adoption studies (you might benefit from learning to use Google Scholar) and more recent variants like using sperm donors (a design I just learned about yesterday), your classic twin study design and most any 'within-family' design does control for parental actions, because the subjects have the same parents. E.g. if a trait were solely due to parental actions, then monozygotic twins should have exactly the same concordance as dizygotic twins despite their very different genetic overlap, because they're born at the same time to the same parents and raised the same.
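The twin logic above is what underlies the classic ACE variance decomposition. As a rough sketch (Falconer's formulas; the correlations below are made-up numbers purely for illustration):

```python
def falconer_ace(r_mz, r_dz):
    """ACE decomposition from MZ/DZ twin correlations (Falconer's formulas)."""
    a2 = 2 * (r_mz - r_dz)   # A: additive genetic variance (heritability)
    c2 = 2 * r_dz - r_mz     # C: shared environment, incl. parental actions
    e2 = 1 - r_mz            # E: non-shared environment + measurement error
    return a2, c2, e2

# If a trait were driven solely by parental actions, MZ and DZ twins
# would correlate equally, and the heritability estimate comes out zero:
print(falconer_ace(0.6, 0.6))  # -> (0.0, 0.6, 0.4)
```

The point of the made-up numbers: equal MZ and DZ correlations push the whole twin correlation into the shared-environment term, which is exactly what 'solely due to parental actions' would predict.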
More importantly, the point of GCTA is that by using unrelated strangers, they are also affected by unrelated parents and unrelated environments. So I'm not sure what objection you seem to have in mind.
I used ingres's excellent LW 2016 survey data set to do some analyses on the extended LW community's interest in cryonics. Fair warning, the stats are pretty basic and descriptive. Here it is: http://www.brainpreservation.org/interest-in-cryonics-from-the-less-wrong-2016-survey/
A good post in a generally good blog. Samples:
How big is your filter bubble? What’s in it? What’s outside it? Okay, next question: how can you tell?
...
...Culture, in the part of the world in which I’ve been, and, for all I know, in other parts as well to which I cannot speak, has two rough parts: the Mainland and the Isles.
The Mainland is what calls itself the “mainstream” or “normal” culture.
You know… Mundania.
The Isles are everything else. Everything that’s not “mainstream” is an island.
Nobody knows how many Isles there are. They are wholly and utterly...
Just to forestall confusion, that ending is not the ending of the SSC post, but the (near-)ending of the post Lumifer linked to. (In particular, Scott is not calling himself autistic.)
Any advice on what is the best way to buy index funds and/or individual stocks? Particularly for people in the UK?
I know this has probably been asked before on a 'basic knowledge' thread, but I can't find the answer.
What was the result of the IARPA prediction contest (2010-2015)?
Below I present what seem to me very basic questions about the results. I have read vague statements about the results that sound like people are willing to answer these questions, but the details seem oddly elusive. Is there some write-up I am missing?
How many teams were there? 5 academic teams? What were their names, schools, or PIs? What was the “control group”? Were there two, an official control group and another group consisting of intelligence analysts with access to classified infor...
Following the usual monthly linkfest on SSC, I stumbled upon an interesting paper by Scott Aaronson.
Basically, he and Adam Yedidia constructed a Turing machine whose behavior is independent of ZFC: ZFC can prove neither that it halts nor that it runs forever (it does run forever, but proving that requires a theory stronger than ZFC).
It is already known from Chaitin's incompleteness theorem that every formal system has a complexity limit, above which it cannot prove or disprove certain assertions. The interesting, perhaps surprising, part of the result is that said Turing machine has 'only' 7918 states, that is, a reg...
Reminiscing over one of my favourite passages from Anathem, I've been enjoying looking through visual, wordless proofs of late. The low-hanging fruit is mostly classical geometry, but a few examples of logical proofs have popped up as well.
This got me wondering if it's possible to communicate the fundamental idea of Bayes' Theorem in an entirely visual format, without written language or symbols needing translation. I'd welcome thoughts from anyone else on this.
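For what it's worth, the usual candidate for a wordless Bayes presentation is the "natural frequencies" framing: the theorem as counting people in a population grid. A sketch of the arithmetic such a picture would encode (the condition and test numbers here are invented for illustration):

```python
# Bayes by counting: P(H|E) as a fraction of a concrete population.
population = 10_000
base_rate = 0.01        # P(H): 1% have the condition (illustrative)
sensitivity = 0.9       # P(E|H): test detects 90% of true cases
false_positive = 0.05   # P(E|~H): 5% of healthy people test positive

sick = population * base_rate                      # 100 people
true_pos = sick * sensitivity                      # 90 of them test positive
false_pos = (population - sick) * false_positive   # 495 healthy positives
posterior = true_pos / (true_pos + false_pos)      # P(H|E)
print(round(posterior, 3))  # -> 0.154
```

The visual version would just be a grid of 10,000 dots with the two positive-testing groups shaded: the posterior is one shaded region as a fraction of all shading, no symbols required.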
The GRIM test — a method for evaluating published research
Testing the mean...
How do you solve interpersonal problems when neither side can see itself as the one at fault?
I've had a fight with my sister regarding my birthday present. She bought me - boosted with a contribution from my mom and dad - a bunch of clothes. I naturally got mad because:
It has caused a little bit of bitterness. I understand her point of view, which was to make me happy o...
We are getting closer to the future in which you WILL be able to stab people in the face over the internet.
I am starting to look at the health insurance market.
This is a high-level search: where do I find the basic considerations with which to evaluate everything else? Do you know of a good resource?
https://www.facebook.com/groups/144017955332/
The Facebook group has changed names. If you are looking for it, it goes by "Brain Debugging Discussion". The link is the same.
WinSplit Revolution, which lets you set locations and sizes in pixels for various window options, worked beautifully for splitting my wide monitor into thirds, but did not survive the transition to Windows 10. I can find countless window managers that let me snap to the left half or the right half, or if they're particularly fancy into quarters. But I have yet to find a tool with keyboard hotkeys that will divide the monitor space into thirds, or let me set custom locations so I can do it myself.
What am I missing?
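In case it helps anyone answering: the underlying Win32 call is simple enough to script directly, e.g. from Python via ctypes. A minimal sketch of a "snap to thirds" action (assumes Windows; ignores the taskbar, DPI scaling, and window borders):

```python
import ctypes
import sys

def third_rect(screen_w, screen_h, index):
    """(x, y, w, h) of the index-th vertical third of the screen (index 0-2)."""
    w = screen_w // 3
    return (index * w, 0, w, screen_h)

if sys.platform == "win32":
    user32 = ctypes.windll.user32
    screen_w = user32.GetSystemMetrics(0)  # SM_CXSCREEN
    screen_h = user32.GetSystemMetrics(1)  # SM_CYSCREEN
    hwnd = user32.GetForegroundWindow()
    x, y, w, h = third_rect(screen_w, screen_h, 0)  # 0 = left third
    user32.MoveWindow(hwnd, x, y, w, h, True)
```

Bound to three hotkeys via any launcher, this reproduces the old WinSplit behavior; a real tool would use the monitor's work area rather than the full screen size.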
Ok, I have to hold my breath as I ask this, and I'm really not trying to poke any bears, but I trust this community's ability to answer objectively more than other places I can ask, including more than my weak weak Google fu, given all the noise:
Is Sanders actually more than let's say 25% likely to get the nod?
I had written him off early, but I don't get to vote in that primary so I only just started paying attention. I'm probably voting Libertarian anyway, but Trump scares me almost as much as Clinton, so I'd sleep a little better during the meanwhile if...
'...robust increases in craving and exhibit modest changes in autonomic responses, such as increases in heart rate and skin conductance and decreases in skin temperature, when exposed to drug-related versus neutral stimuli....However, when drug-use measures are used in cue reactivity studies the typical finding is a modest increase in drug-seeking or drug-use behavior'
-WP: Cue reactivity
I am not sure if I read it here or on SSC, but someone tried to estimate what a "Mary's room" equivalent for the human brain would look like. A moon-sized library on which robotic crawlers run around at decent fractions of c ...
Does anybody have info on that?
(epistemic status: Ruminations on cognitive processes by a non-expert.)
I have a question tangential to AI safety about goal formation. How do goals form in systems that do not explicitly have goals to begin with?
I tried to google this and didn't find answers for either AI systems or neuropsychology. One source (Rehabilitation Goal Setting: Theory, Practice and Evidence) summarised:
neuroscience has traditionally not been concerned with goal pursuit per se but rather with the cognitive component or sub-components that contribute to it. [...] whereas social psychology has tended to study more abstract life goals.
Apparently many AI safety problems revolve around wrong goals or the extreme satisfaction of goals. The usually implied or explicit definition of a goal seems to be minimizing the difference to some target state (which might be infinite for some valuation functions). Many AI models include some notion of the goal in coded or explicitly given form. In general that coding isn't the 'real' goal. By real goal I mean that which the AI system as a whole appears to optimize for. That may differ from the specification due to the structure of the available input and output channels and the strength of the optimization process. Nonetheless there is some goal, and there is a conceptual relation between the coded and the real goal.
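The coded-goal-vs-real-goal distinction can be made concrete with a toy optimizer; the target value and step rule here are arbitrary illustrations:

```python
import random

TARGET = 10.0

def coded_goal(state):
    """The 'coded' goal: distance to a target state, to be minimized."""
    return abs(state - TARGET)

def optimize(steps=500, seed=0):
    """Random hill climbing on the coded goal."""
    rng = random.Random(seed)
    state = 0.0
    for _ in range(steps):
        candidate = state + rng.uniform(-1.0, 1.0)
        if coded_goal(candidate) < coded_goal(state):
            state = candidate
    return state

# The 'real' goal is whatever this process as a whole converges to; with a
# weaker optimizer or a constrained move set it need not match the coded target.
print(optimize())
```

Here optimizer strength matters directly: with very few steps, the system's observed behavior (its "real" goal) falls well short of the coded target even though the specification never changed.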
But maybe real things can be a bit more complicated. Consider human goal formation. Apparently we do have goals. And we kind of optimize for them. But the question arises: Where do they come from cognitively and neurologically?
Goals are very high level concepts. I think there is no high level specification of the goals somewhere inside us that we read off and optimize for. I think our goals are our own understanding - on that high level of abstraction - of those patterns behind our behavior.
If that is right and goals are just our own understanding of some patterns of behavior, then how come there are specific brain modules (prefrontal cortex) devoted to planning for them? Or rather, how come these brain parts are actually connected to the abstract concept of a goal? Or aren't they? And does the planning act not on our understanding of the goals but on their constituent parts? What are those?
In my children I see clearly goal-directed behavior long before they can articulate the concept. And there are clear intermediate steps where they desperately try to optimize for very isolated goals: for example, winning a race to the door, trying to climb a fence, being the first one to get a treat, winning a game. Losing apparently causes real suffering. But why? Where is the loss? How are any of these things even matched against a loss? How does the brain match whatever representation of reality to these emotions? How do the encodings of concepts for me and you and our race get connected to our feelings about this situation? And I kind of assume here that the emotions themselves somehow produce the valuation that controls our motivation.
I took issue with not knowing how humans formed goals, so I made this list of common human goals and suggested that humans who don't know should look at the list and pick the ones that are relevant to themselves.
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.