Below is a list of interesting content I came across in the first quarter of 2020. I subdivided the links by the type of medium they use, namely auditory, textual, and visual. The links are in no particular order.

1. Auditory

D. McRaney. How a Divisive Photograph of a Perceptually Ambiguous Dress Led Two Researchers to Build the Nuclear Bomb of Cognitive Science out of Socks and Crocs. You Are Not So Smart. 2020:

...the science behind The Dress, why some people see it as black and blue, and others see it as white and gold. But it’s also about how the scientific investigation of The Dress led to the scientific investigation of socks and Crocs, and how the scientific investigation of socks and Crocs may be, as one researcher told me, the nuclear bomb of cognitive neuroscience.

When facing a novel and uncertain situation, the brain secretly disambiguates the ambiguous without letting you know it was ever uncertain in the first place, leading people who disambiguate differently to seem iNsAnE.

C. Connor. Psychoacoustics. YouTube. 2020:

00:00 Psychoacoustics is the study of the perception of sound. These videos attempt to gather all of the various interesting phenomena that fall into this category in one condensed series, including many neat illusions. We will also cover a few fascinating geeky topics relating to hearing.

MIT. 15.ai. fifteen.ai. 2020:

This is a text-to-speech tool that you can use to generate 44.1 kHz voices of various characters. The voices are generated in real time using multiple audio synthesis algorithms and customized deep neural networks trained on very little available data (between ~~30~~ 15 and 120 minutes of clean dialogue for each character). This project demonstrates a significant reduction in the amount of audio required to realistically clone voices while retaining their affective prosodies.

2. Textual

AI Impacts. Interviews on Plausibility of AI Safety by Default. AI Impacts Blog. 2020:

AI Impacts conducted interviews with several thinkers on AI safety in 2019 as part of a project exploring arguments for expecting advanced AI to be safe by default. The interviews also covered other AI safety topics, such as timelines to advanced AI, the likelihood of current techniques leading to AGI, and currently promising AI safety interventions.

Before taking into account other researchers’ opinions, Shah guesses an extremely rough ~90% chance that even without any additional intervention from current longtermists, advanced AI systems will not cause human extinction by adversarially optimizing against humans.

Christiano is more optimistic about the likely social consequences of advanced AI than some others in AI safety, in particular researchers at the Machine Intelligence Research Institute (MIRI).

Gleave thinks there’s a ~10% chance that AI safety is very hard in the way that MIRI would argue, a ~20-30% chance that AI safety will almost certainly be solved by default, and a remaining ~60-70% chance that what we’re working on actually has some impact.

Hanson thinks that now is the wrong time to put a lot of effort into addressing AI risk.

DeepMind. Outperforming the Human Atari Benchmark. DeepMind Blog. 2020:

The Atari57 suite of games is a long-standing benchmark to gauge agent performance across a wide range of tasks. We’ve developed Agent57, the first deep reinforcement learning agent to obtain a score that is above the human baseline on all 57 Atari 2600 games. Agent57 combines an algorithm for efficient exploration with a meta-controller that adapts the exploration and long vs. short-term behaviour of the agent.

Agent57 is built on the following observation: what if an agent can learn when it’s better to exploit, and when it’s better to explore? We introduced the notion of a meta-controller that adapts the exploration-exploitation trade-off, as well as a time horizon that can be adjusted for games requiring longer temporal credit assignment. With this change, Agent57 is able to get the best of both worlds: above human-level performance on both easy games and hard games.
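The meta-controller idea can be illustrated with a deliberately tiny toy (this is not DeepMind's implementation; the arm probabilities, candidate epsilons, and update rules here are all made up): a meta-controller chooses among exploration rates for an inner epsilon-greedy learner, preferring whichever rate has recently paid off.

```python
import random

random.seed(0)

# Toy sketch of "learning when to explore" (not DeepMind's implementation):
# a meta-controller picks an exploration rate (epsilon) for an inner
# epsilon-greedy agent on a 3-armed bandit, favouring epsilons that
# have yielded higher average reward.
ARM_MEANS = [0.2, 0.5, 0.8]          # hidden reward probabilities (made up)
EPSILONS = [0.01, 0.1, 0.5]          # candidate exploration rates (made up)

q = [0.0] * len(ARM_MEANS)           # inner agent's value estimates
n = [0] * len(ARM_MEANS)
meta_q = [0.0] * len(EPSILONS)       # meta-controller's value estimates
meta_n = [0] * len(EPSILONS)

for t in range(5000):
    # Meta-controller: epsilon-greedy over exploration rates.
    if random.random() < 0.1:
        e_idx = random.randrange(len(EPSILONS))
    else:
        e_idx = max(range(len(EPSILONS)), key=lambda i: meta_q[i])
    eps = EPSILONS[e_idx]

    # Inner agent: epsilon-greedy over arms with the chosen epsilon.
    if random.random() < eps:
        arm = random.randrange(len(ARM_MEANS))
    else:
        arm = max(range(len(ARM_MEANS)), key=lambda i: q[i])

    reward = 1.0 if random.random() < ARM_MEANS[arm] else 0.0

    # Update both levels with incremental averages.
    n[arm] += 1
    q[arm] += (reward - q[arm]) / n[arm]
    meta_n[e_idx] += 1
    meta_q[e_idx] += (reward - meta_q[e_idx]) / meta_n[e_idx]

best_arm = max(range(len(ARM_MEANS)), key=lambda i: q[i])
print(best_arm)
```

The point of the two-level structure is that exploration itself becomes something the agent evaluates by its payoff, rather than a fixed schedule set by the experimenter.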

T. Weiss et al. Perceptual Convergence of Multi-Component Mixtures in Olfaction Implies an Olfactory White. PNAS. 2012:

In vision, two mixtures, each containing an independent set of many different wavelengths, may produce a common color percept termed “white.” In audition, two mixtures, each containing an independent set of many different frequencies, may produce a common perceptual hum termed “white noise.” Visual and auditory whites emerge upon two conditions: when the mixture components span stimulus space, and when they are of equal intensity.

We conclude that a common olfactory percept, “olfactory white,” is associated with mixtures of ∼30 or more equal-intensity components that span stimulus space, implying that olfactory representations are of features of molecules rather than of molecular identity.
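The two conditions for a perceptual "white" (many components spanning stimulus space at equal intensity) can be illustrated numerically. This is only a toy sketch, not the paper's method: two independent mixtures of equal-intensity components, placed at random positions in a one-dimensional "stimulus space", distribute their energy across coarse bins more and more similarly as the component count grows.

```python
import random

random.seed(1)

def mixture_energy(num_components, num_bins=20):
    """Fraction of total energy per coarse bin, for a mixture of
    equal-intensity components at random positions in [0, 1)."""
    bins = [0.0] * num_bins
    for _ in range(num_components):
        pos = random.random()              # component position in stimulus space
        bins[int(pos * num_bins)] += 1.0   # equal intensity per component
    total = sum(bins)
    return [b / total for b in bins]

def overlap(a, b):
    """Similarity of two energy profiles (1.0 = identical)."""
    return sum(min(x, y) for x, y in zip(a, b))

# Two independent sparse mixtures barely overlap...
sparse = overlap(mixture_energy(5), mixture_energy(5))
# ...but two independent dense mixtures both look near-uniform ("white").
dense = overlap(mixture_energy(2000), mixture_energy(2000))
print(sparse, dense)
```

The convergence of independent dense mixtures onto one common percept is the analogue of the ~30-component threshold the paper reports for olfaction.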

3. Visual

3Blue1Brown. Simulating an Epidemic. YouTube. 2020:

01:12 These simulations represent what’s called an “SIR model”, meaning the population is broken up into three categories: those who are susceptible to the given disease, those who are infectious, and those who have recovered from the infection.

04:30 The first key takeaway to tuck away in your mind is just how sensitive this growth is to each parameter in our control. It’s not hard to imagine changing your daily habits in ways that multiply the number of people you interact with or that cut your probability of catching an infection in half.

09:00 A second key takeaway here is that changes in how many people slip through the tests cause disproportionately large changes to the total number of people infected.

21:22 After making all these, what I came away with more than anything was a deeper appreciation for disease control done right; for the inordinate value of early widespread testing and the ability to isolate cases; for the therapeutics that treat these cases, and most importantly for how easy it is to underestimate all that value when times are good.
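The SIR dynamics described at 01:12, and the parameter sensitivity described at 04:30, can be captured in a few lines. This is a minimal discrete-time sketch with made-up parameter values, not the video's agent-based simulation:

```python
# Minimal discrete-time SIR model (made-up parameters, not the video's
# agent-based simulation). s, i, r are population fractions.
def run_sir(beta, gamma, s0=0.999, i0=0.001, steps=365):
    """beta: daily infection rate, gamma: daily recovery rate."""
    s, i, r = s0, i0, 0.0
    peak_infected = i
    for _ in range(steps):
        new_infections = beta * s * i      # susceptible meeting infectious
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak_infected = max(peak_infected, i)
    return r, peak_infected                # fraction ever infected, peak fraction

# Sensitivity to parameters: halving the infection rate (e.g. by changing
# daily habits) shrinks the epidemic disproportionately.
total_hi, peak_hi = run_sir(beta=0.30, gamma=0.10)
total_lo, peak_lo = run_sir(beta=0.15, gamma=0.10)
print(total_hi, total_lo)
```

Halving beta here doesn't halve the outbreak; it changes the final size nonlinearly, which is the "sensitive to each parameter" takeaway in miniature.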

3Blue1Brown. Bayes Theorem, and Making Probability Intuitive. YouTube. 2019:

00:00 The goal is for you to come away from this video understanding one of the most important formulas in all of probability, Bayes’ theorem. This formula is central to scientific discovery, it’s a core tool in machine learning and AI, and it’s even been used for treasure hunting, when in the ’80s a small team led by Tommy Thompson used Bayesian search tactics to help uncover a ship that had sunk a century and a half earlier carrying what, in today’s terms, amounts to $700,000,000 worth of gold. So it's a formula worth understanding.

08:44 This is sort of the distilled version of thinking with a representative sample where we think with areas instead of counts, which is more flexible and easier to sketch on the fly. Rather than bringing to mind some specific number of examples, think of the space of all possibilities as a 1x1 square. Any event occupies some subset of this space, and the probability of that event can be thought about as the area of that subset.
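The area framing at 08:44 maps directly onto Bayes' theorem: the posterior is the area where hypothesis and evidence overlap, divided by the total area the evidence occupies. A small sketch with made-up numbers:

```python
# Bayes' theorem via areas in a 1x1 square of all possibilities
# (the numbers are made up for illustration).
p_h = 0.04            # prior: width of the hypothesis strip
p_e_given_h = 0.9     # evidence area within the hypothesis strip
p_e_given_not_h = 0.1 # evidence area within the rest of the square

# Total evidence area = overlap with H plus overlap with not-H.
p_e = p_h * p_e_given_h + (1 - p_h) * p_e_given_not_h
# Posterior = (area of H-and-E) / (total area of E).
p_h_given_e = p_h * p_e_given_h / p_e
print(round(p_h_given_e, 3))   # → 0.273
```

Thinking in areas makes it obvious why a small prior strip keeps the posterior modest even when the likelihood is high: most of the evidence area can still sit outside the hypothesis strip.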

Tania Lombrozo. Learning By Thinking. Edge. 2017:

Sometimes you think you understand something, and when you try to explain it to somebody else, you realize that maybe you gained some new insight that you didn't have before. Maybe you realize you didn't understand it as well as you thought you did. What I think is interesting about this process is that it’s a process of learning by thinking. When you're explaining to yourself or to somebody else without them providing feedback, insofar as you gain new insight or understanding, it isn't driven by that new information that they've provided. In some way, you've rearranged what was already in your head in order to get new insight.

Sometimes what we want to do is be persuasive. Sometimes what we want to do is come up with a convenient way for solving a particular type of problem. Again, it might be wrong some of the time, but it's going to be much easier to implement in other cases. There are all sorts of different epistemic and social goals that we might have. Increasingly, I'm thinking that maybe explanation doesn't have just one goal; it probably has multiple goals. Whatever it is, it's probably not just the thing that Bayesian inference tracks. It's probably tracking some of these other things.

Veritasium. Parallel Worlds Probably Exist. YouTube. 2020:

00:56 So how are we to reconcile the spread-out wavefunction, evolving smoothly under the Schrödinger equation, with this point-like particle detection?

02:23 ...the outcomes of experiments. So the way quantum mechanics came to be understood, and the way I learned it, is that there are two sets of rules: when you're not looking, the wavefunction simply evolves according to the Schrödinger equation; but when you are looking, when you make a measurement, the wavefunction collapses suddenly and irreversibly, and the probability of measuring any particular outcome is given by the amplitude of the wavefunction associated with that outcome, squared. Now, Schrödinger himself hated this formulation.

04:54 In this video I want to show that there is a better way to think about Schrödinger's cat, in fact a better way to think about quantum mechanics entirely, that I'd argue is more logical and consistent. To get there, we have to examine the three essential components of Schrödinger's cat: superposition, entanglement, and measurement, to see if any of them is flawed.

11:58 The implication is that the founders of quantum theory may have got it exactly backwards: the wavefunction is the complete picture of reality, and our measurement is just a tiny fraction of it.
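The measurement rule quoted at 02:23, probability equals squared amplitude (the Born rule), is easy to sketch for a discrete state. The amplitudes below are illustrative values, not from the video:

```python
import math

# Born rule sketch: a quantum state as complex amplitudes over two
# measurement outcomes (illustrative values, not from the video).
amplitudes = [complex(0.6, 0.0), complex(0.0, 0.8)]  # a normalized state

# The probability of each outcome is the squared magnitude of its amplitude.
probs = [abs(a) ** 2 for a in amplitudes]
print(probs)

# For a valid (normalized) state, outcome probabilities sum to 1.
assert math.isclose(sum(probs), 1.0)
```

Note that the phases of the amplitudes drop out of the probabilities; they only matter when amplitudes interfere before measurement.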

How To Make Everything. Creating My Own Alphabet From Scratch. YouTube. 2020:

03:03 Well, writing, in essence, can be described as a system in which the attempt is made to put language in some sort of written form.

03:59 Developed soon after that was writing that can convey words, the logogram; writing that can also convey syllables, we call those syllabograms; and then these early writing systems, both in Mesopotamia and Egypt, had a third category of signs, and those were determinatives.

SmarterEveryDay. How Rockets Are Made. YouTube. 2020:

02:51 This is Vulcan, and this rocket has never flown. Never flown, not yet. And you're going to see the first flight vehicle hardware in the factory being fabricated when we go in there today.

51:54 Yes, our specialty are the higher-energy, more difficult orbits, things like Mars 2020, an interplanetary mission. -Right, and that's... Literally right there, yeah, but we don't call it Mars 2020, here. -What do you call it? We call it Mars 2020...20, -Why would you do that? Because it's our 20th trip to Mars.

VegSource. When Supplements Harm. YouTube. 2020:

Today we look at research showing the potential problems of raising your B12 blood levels too high, which could include death.

If you adhere to a vegan diet, B12 supplementation is prudent. I also recommend having yourself tested for vitamin B12 deficiency every few years. The most appropriate test for evaluating B12 status is the urine test for methylmalonic acid (MMA). Elevated MMA is currently the best tool for detecting vitamin B12 deficiency, and is considered to be superior to testing for serum B12 directly. An alternative and less costly screening blood test is homocysteine.

TwoMinutePapers. This Neural Network Turns Videos Into 60 FPS. YouTube. 2020:

00:12 It almost always happens that I encounter paper videos that have anything from 24 to 30 frames per second. In this case, I put them in my video editor that has a 60 fps timeline, so half or even more of these frames will not provide any new information. As we try to slow down the videos for some nice slow-motion action, this ratio is even worse, creating an extremely choppy output video because we have huge gaps between these frames.

02:18 The design of this neural network tries to produce four different kinds of data to fill in these images [...] optical flows [...] depth map [...] contextual extraction [...] interpolation kernels...

03:18 All it needs is just the two neighboring images.
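As a point of contrast with the learned approach (which predicts optical flow, depth, context, and interpolation kernels from the two neighboring images), the naive baseline is a plain cross-fade between those two frames; the four predicted quantities exist precisely to improve on it. A toy sketch, not the paper's method:

```python
# Naive frame interpolation baseline: a plain cross-fade between two
# neighboring frames (toy 2x2 grayscale "images" as nested lists).
# The paper's network replaces this with flow-, depth-, context- and
# kernel-guided warping; this is only the trivial point of comparison.
def cross_fade(frame_a, frame_b, t):
    """Blend two frames; t=0 gives frame_a, t=1 gives frame_b."""
    return [
        [(1 - t) * a + t * b for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(frame_a, frame_b)
    ]

frame_a = [[0.0, 0.0], [1.0, 1.0]]   # bright object in the bottom row
frame_b = [[1.0, 1.0], [0.0, 0.0]]   # object has moved to the top row

# Midpoint frame: instead of showing the object halfway through its
# motion, a cross-fade ghosts both positions at half brightness.
# That artifact is why flow-based interpolation exists.
mid = cross_fade(frame_a, frame_b, 0.5)
print(mid)   # → [[0.5, 0.5], [0.5, 0.5]]
```

The ghosting in `mid` is exactly the failure the optical-flow and kernel predictions are meant to fix: they move pixels along estimated motion paths rather than blending them in place.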


Thanks for the links and I hope you post another next quarter!

No problem, let me know which ones you find the most interesting. I'll try to improve the quality per link over time.

(between 30 15 and 120 minutes

The 30 was crossed out in the original quote. (between 30 15 and 120 minutes of clean dialogue for each character). I guess quoting it didn't take the formatting with it.