There is also scope for helping people think through a thing in a way that they would endorse, e.g. by asking a sequence of questions.
As aptly demonstrated:
I don't think this is a good illustration of point 6. The video shows a string of manipulative leading questions, falling short of the "in a way that they would endorse" criterion.
When people understand that a string of questions is designed to strong-arm them into a given position, they rarely endorse it. It seems to me that point 6 is more about benevolent and honest uses of leading questions.
Admittedly, I am making the assumption that "in a way that they would endorse" means "such that if people understood the intent that went into writing the string of questions in that way, they would approve of the process".
I feel point 4 can be explained by humans not having probability distributions on future events but something more like infradistributions/imprecise distributions. This is a symptom of a larger problem of Bayesian dogmatism that has taken hold of some parts of LW/rationalists.
Let me explain how this works:
To recall: an imprecise distribution is the convex (closed) hull of a collection of probability distributions $\{p_1, \dots, p_n\}$. In other words, it combines 'Knightian' uncertainty with probabilistic uncertainty.
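In symbols, a minimal sketch (the notation $\mathcal{C}$, $\underline{P}$, $\overline{P}$ is my own, chosen to match the usual lower/upper-probability convention):

$$\mathcal{C} = \overline{\operatorname{conv}}\{p_1, \dots, p_n\}, \qquad \underline{P}(A) = \min_{p \in \mathcal{C}} p(A), \qquad \overline{P}(A) = \max_{p \in \mathcal{C}} p(A).$$

The Knightian uncertainty is in not knowing which $p \in \mathcal{C}$ is right; the ordinary probabilistic uncertainty lives inside each individual $p$. The next two paragraphs are, in this notation, claims about $\overline{P}$ and $\underline{P}$ respectively.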
If you ask people for the year by which there is a 10%/50%/90% chance of AGI happening, you are implicitly asking for the worst case: i.e. the earliest year $y$ such that there is at least one probability distribution $p$ in the collection with $p(\text{AGI by } y) = 10\%, 50\%, 90\%$.
On the other hand, when you ask for the probability of the event happening within 10/20/50 years, you are asking for the dual 'best case' scenario: i.e. asking, for ALL probability distributions $p$, what $p(\text{AGI in 10y})$, $p(\text{AGI in 20y})$, $p(\text{AGI in 50y})$ are, and taking the minimum.
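A toy numerical sketch of this asymmetry (my own illustration: the constant-hazard model, the three annual rates, and the year range are all invented for the example, not taken from the post or the comment):

```python
import numpy as np

# Three made-up "AGI by year y" CDFs, each from a constant annual
# hazard rate. They stand in for the collection {p_1, ..., p_n}
# whose convex hull is the imprecise distribution.
years = np.arange(2025, 2126)  # ~100-year horizon

def constant_hazard_cdf(annual_rate: float) -> np.ndarray:
    """P(AGI by year y) when every year has the same chance of success."""
    return 1 - (1 - annual_rate) ** (years - years[0] + 1)

cdfs = np.array([constant_hazard_cdf(r) for r in (0.005, 0.02, 0.06)])

upper = cdfs.max(axis=0)  # 'worst case': SOME distribution assigns this much
lower = cdfs.min(axis=0)  # 'best case': EVERY distribution assigns at least this

# Framing 1: "by which year is there a 10%/50%/90% chance?"
# Read existentially, the answer is the earliest year where the
# upper envelope crosses the threshold.
for q in (0.10, 0.50, 0.90):
    year = years[np.argmax(upper >= q)] if upper[-1] >= q else "beyond horizon"
    print(f"some distribution reaches {q:.0%} by {year}")

# Framing 2: "what is the chance of AGI within 10/20/50 years?"
# Read universally, the answer is the lower envelope at that horizon;
# really all one can report is the whole interval.
for horizon in (10, 20, 50):
    i = horizon - 1  # index of "within `horizon` years"
    print(f"P(AGI within {horizon}y) lies in [{lower[i]:.1%}, {upper[i]:.1%}]")
```

The two framings read off different envelopes of the same set, which is one way the phrasing of the question can move the answer around.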
This does seem to be a useful insight, though I don't think it's anywhere near so precise as that.
Personally, the Knightian uncertainty completely dominates my timeline estimates. If someone asks for which year the cumulative probability reaches some threshold, then firstly that sounds like a confusion of terms, and secondly I have or can generate (as described) a whole bunch of probability distributions without anything usable as weightings attached for each. Any answer I give is going to be pointless and subject to the whims of whatever arbitrary weightings I assign in the moment, which is likely to be influenced by the precise wording of the question and probably what I ate for breakfast.
It's not going to be the worst case - that's something like "I am already a simulation within a superintelligent AGI and any fact of the matter about when it happened is completely meaningless due to not occurring in my subjective universe at all". It's not going to be the best case either - that's something like "AGI is not something that humans can create, for reasons we don't yet know". Note that both of these are based on uncertainties: hypotheses that cannot be assigned any useful probability since there is no precedent nor any current evidence for or against them.
It's going to be something in the interior, but where exactly in the interior will be arbitrary, and asking the question a different way will likely shift where.
Curated. I'm interested in this both from the perspective of personal epistemics and group epistemics. Surveys are a tool for figuring out things about the world, and they tend to also be a way to get on the same page about how the world looks.
Thanks Katja for sharing a bunch of lived experience on how to execute surveys well. :)
Some of these are strikingly similar to advice for how to interview users when designing user-friendly software.
I guess it makes sense that there's some crossover.
I like it!
This is not my research area but this list looks really relevant. Thanks for posting it!
For those that do not know: survey methods and survey analysis form a field of academic research in their own right. There are people who specialise in this topic, and hence we can learn from them or pay them to consult on the design of our surveys.
E.g. SMAG (the Survey Methods and Analysis Group) at the University of Manchester and NCRM (the National Centre for Research Methods) are two I know of in the UK.
There is a "Journal of Survey Statistics and Methodology" and an "International Journal of Social Research Methodology".
And there are undergraduate textbooks on the subject as well.
Good post.
Interest in surveys doesn’t seem very related to whether a survey is a good source of information on the topic being surveyed. One of the strongest findings of the 2016 survey, IMO, was that surveys like that are unlikely to be a reliable guide to the future.
Can you say more?
Second sentence:
First sentence:
Point 10 should be 1, and probably a variant should be 2. And surveys are not even all that good at finding out what people think. They can sometimes find out how people feel or what their current reaction is.
On point 14: it depends on who's doing the rating. I'll point out that survey design and interpretation is a pretty big business - there's a reason Qualtrics charges so much (and that SAP paid $8B for the company), and a related reason that competitors universally suck - the actual presentation is the easy (and non-profitable) part. The design and analytics are difficult and command a lot of revenue.
Things I believe about making surveys, after making some surveys: