Woof!
Fully agree on the bias part, although specialists being incompetent isn't a thread in my article? There's an entire aside about why some research doesn't get done, and incompetence isn't among the reasons.
I've read the Slate article you linked, and I think it's good. I don't see anything in there that I disagree with. The article is from 2019, when the evidence (and, importantly, the number of people who had successfully replicated it) amounted to just one Instagram dog. Even back then the scientists in the article are cautious but not dismissive, and want a more rigorous study. Now we have a more rigorous study running.
All this stuff has been addressed in the comments and in the updated article. I'm quite adamant that misinterpreting dog output is the primary danger and I don't claim confidence in specific abilities, precisely because we need more study to determine what's real and what's confirmation bias/misinterpretation.
That wasn't a point about dog research; it was a point about the dynamics of which kinds of discoveries and research get made more often.
"in the 60s" for social/cognitive/psychology-adjacent research has to be a bit like "in mice" for medicine. Either way, people try to do something and fail, 60 years later someone comes up with an approach that works. That's a completely normal story.
I thought about taking you up on the bet at 3:1 but I don't like the "vast majority" part. I think it's too much work to specify the rules precisely enough and I've spent enough time on this already.
> word order is effectively random, length of sentence does not correlate with information content
That seems to be the case with dogs, and it won't surprise me if they never progress much further than that.
I've updated the article to include a more in-depth explanation of the study design and philosophy instead of just two links (I suspect almost nobody clicked them). I also added responses to common criticisms, plus titles and short explanations for the video links (I suspect a lot of people didn't click most of the videos), and removed the revolution part.
If you've already read the article, I suggest you read the research and criticism parts under Bunny and watch the new Stella video I added, which is more representative of the kinds of videos that led me to watch the dog space more closely. All of the good Stella stuff is on Instagram rather than YouTube.
I think we're mostly in agreement, and I'm not disputing that it pays to be careful when it comes to animal cognition. I'd say again that I think it's a meta-rational skill to see the patterns of what is likely to work and what isn't, and this kind of stuff is near-impossible to communicate well.
I've read about the car-nutcracker thing somewhere, but without the null result from research. If you'd asked me to bet, I'd have said it was unlikely to work. But it's illustrative that we both still agree that corvids are smart and that there's a ton of evidence for it. We just don't know the exact ways and forms, and that's how I feel about the dog thing. There's something there, but we need to actually study it to know its exact shape and form.
> I predict FluentPet is at best going to become a niche hobby down the road, with less than 1% of dog owners having trained their pet in 10 years.
I don't think it will be niche, because it's already not niche, considering the massive viewership. But your 1% figure sounds about right as an upper bound, given the sheer number of dog owners, the amount of work required, and people's low desire to train their pets. A cursory Google search says 4% of US dog owners take a training class, so serious button use would have to be a fraction of that.
I think you hit the nail on the head here. When I was writing the article I definitely had someone with a high prior in mind, to the point where I expected people to say "so what, why wouldn't dogs do that if you trained them".
Sometimes people seem to put dogs closer to reflexive automatons like insects than to fellow mammals. My prior is that the basic affects we feel aren't fundamentally different between us and dogs (and most higher mammals). I'm talking about stuff like fear, excitement, generalized negative or positive affect, tiredness, sexual arousal. Even something like craving a specific food: I don't see why it should be unique to us, given that dogs are often picky eaters and have favorite foods.
People with strong priors against dog intelligence seem to ascribe everything to anthropomorphism, and there's often an undertone of "these people are too soft and weak, they call themselves ridiculous things like 'dog parents', they'd project human baby characteristics onto a Furby if you gave them the chance". FWIW I don't have a dog and don't plan to, and in my experience most dogs are fairly dumb. But to me they're clearly a bit more than simple automatons blindly reacting to stimuli.
Plenty of concerns were raised in the comments; have you gone through all of them and all the replies?
I'm aware of comparative cognition; the people posting the pet videos are participating in ongoing research at the Comparative Cognition Lab at the University of California, San Diego. They give a description of their methodology, but the status updates appear to be hidden to ensure the integrity of the data.
A short recap of the comments: this is a very new thing, and early-stage science often looks like messing around, so don't expect lots of rigor this early. If they had a paper, I would post that. On the balance of evidence, the videos seem to be made in good faith; I don't think it's some staged viral crap. Don't discount evidence just because it comes as normie YouTube vids. The main claim is that there's something interesting going on that makes me suspect dogs could produce something that looks like language. I'm not claiming certainty on that, or on the level of dogs' supposed language ability; it's research in progress, but I think it's exciting and worth studying.
I don't think it's fair to call my dismissal of concerns "cursory" if you include my comments under the post. Maybe the article itself didn't go deep enough: partly I wanted it to scan well, and partly I wanted to see good criticism so I could update and come up with good responses, because it's not easy to preempt every criticism.
As for cursory evidence, yes it's mostly that, but cursory evidence can still be good Bayesian evidence. I think there's enough to conclude there's something interesting going on.
For starters, all of this hinges on videos being done in good faith. If it's all creative editing of pets' random walks (heh) over the board, then of course we should dismiss everything out of the gate.
On the balance of evidence, Alexis doesn't look like someone who's trying very hard to convince you of her magic talking dog so she can sell you $250 online dog-communication courses. And don't say "Amazon affiliate links"; even Scott has done that.
But a bigger part of why I updated towards "there's something there" is that there are several people who recreated this. Of course it's possible that every one of them is also fake, but that would be a bigger reach. Or it could be that it's easy to delude yourself and overinterpret pet output, but then the videos are still in good faith and that's what we're determining here.
I'll just link to a few comments of mine on that:
- Simple button use is expected by induction, danger of over-interpreting
- It can't be classic Clever Hans if the owner doesn't know the right answer
It kind of does, but that wasn't the model. What I had in the back of my mind was "if Eliezer gets to do it, then I get to do it too". I think the community simply likes boldly stated (and especially contrarian) claims, as long as they don't go too far off-balance.
I didn't consciously go for any "maneuvers" to misrepresent things. IMO the only actually iffy part is the revolution line (steelman: if your pet can tell you what it actually wants instead of you having to guess, that is a revolution in communication).
And I think I hedged my claims pretty well. This stuff is highly suggestive; my position is "hey, despite the trappings of looking like fake viral videos, you should look at this, because it's more interesting than it seems at first glance". I expect that we'll learn something interesting, but I don't have any certainty about how much. Maybe after rigorous analysis we'll see that dogs manage only rudimentary communication and the rest is confirmation bias. Maybe we'll learn something more surprising.
> To me, this doesn't feel too dissimilar from something my cousin-who-is-into-pyramid-schemes would send me. This article in particular feels not too dissimilar from something I could imagine on e.g. Buzzfeed; it just says some big things with very little substantive evidence and some maneuvers that seem most commonly used to weakly mask the lack of credibility of the argument.
I expected more complaints of this kind, so I was pleasantly surprised. I can easily imagine structurally similar arguments from someone who thinks AI alignment or cryonics are weird "nerd woo". If we're to be good rationalists, we have to recognize that most evidence isn't neatly packaged for us in papers (or gwern articles) with hard numbers and rigorous analysis. We can't just exclude the messy parts of the world and expect to arrive at a useful worldview. Sometimes interesting things happen on Instagram and TikTok.
To be honest, a few of the reasons you give for deciding the evidence is "not compelling" are pretty weird. Why does it matter whether the dog uses one paw or both? Why is it weird that a dog has a "bed time"? What is "seeming disinterest" from the dog, and what makes you think you can detect it? Why do you expect dogs to strive for brevity and to be "more clear"?
I like the idea of interacting with the dog normally, as you would with a 2-year-old human, while having cameras running 24/7 so that less biased people can look over the data.
Yeah, it's an important point that some phenomena (perhaps most phenomena) are impractical to recreate under a strict research protocol. If you tried to teach your dog with a very formal approach, you'd probably "lose the magic" that makes it happen. Kaj Sotala posted a comment suggesting that "incorrect" overinterpretation of babies' behavior is actually an important mechanism by which the learning happens! Getting to that "meeting of minds" is a slow, messy, iterative process.
I also really like the research setup, and I'm glad they're sourcing data from several pet households. Most of the attention is on Bunny, because she's the most coherent and her videos are actively posted on YouTube, but I believe quite a few more people are participating; they just don't post publicly.
And even though the learning process isn't under a strict protocol, you can still design more rigorous experiments on top of it. AFAIK the mirror near Bunny's buttons was placed at the researchers' suggestion specifically to see whether she'd recognize herself in it.
I've noticed that the more high-level and complex the work you're doing, the sillier your bugs get. Perhaps it's because you focus so much on the complex parts, since they're the ones that are difficult to get right, and gloss over the more basic parts.
I don't think your pyramid is a good conceptual framework for understanding programming expertise. Expertise comes mostly from seeing common, overarching patterns (which would be scattered all over your pyramid) and from understanding the entire stack: having at least some sense of how each level functions, from the high-level abstraction of RoR's ORM, to object lifetime and memory concerns, to how database queries are issued, to how the database executes them (e.g. db indexes are also relevant to your example), down to at least having played around a little with assembly language.
I don't even know Ruby or RoR, but if I had to use it for your example, my first thought would be "ok, how do I do a WHERE query in their ORM?", because every db abstraction in every language and framework has to solve this problem and I've seen a lot of those solutions. And I'd know to consider eager vs lazy evaluation (what if a campaign has 1M records after filtering? maybe I want to iterate over results instead of getting a plain list), and whether campaign_id has an index, because all of those are very common concerns that crop up.
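To make that concrete, here's roughly the shape of it (a sketch only: I'm not a Rails person, and the `Click`/`Campaign` models and `process` helper are made-up stand-ins for your example, though `where` and `find_each` are standard ActiveRecord calls):

```ruby
# Hypothetical models for illustration: a Campaign has many Clicks.

# The trap: .all loads the ENTIRE table into Ruby memory, and .filter
# (plain Enumerable) then filters it there, row by row.
clicks = Click.all.filter { |c| c.campaign_id == campaign.id }

# The fix: push the filtering into the database as a SQL WHERE clause.
# With an index on clicks.campaign_id, only the matching rows are touched.
clicks = Click.where(campaign_id: campaign.id)

# And if a campaign can match ~1M rows, don't materialize one giant array;
# find_each iterates in batches (1000 rows at a time by default).
Click.where(campaign_id: campaign.id).find_each do |click|
  process(click) # stand-in for whatever you do per record
end
```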
So the expertise isn't knowing the factoid "don't use x.all.filter() in RoR"; it's knowing that anything that queries a database has to deal with the concerns above somehow.