This is a special post for quick takes by philip_b. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Any guesses at the difficulty? My first impression was that this is not going to be solved any time soon. I just don't think current techniques are good enough to write flawless Lean code given a difficult objective.

I think grand challenges in AI are sometimes useful, but when they are at this level I am a bit pessimistic. I don't think this is necessarily AI-complete, but it's perhaps close.
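As a toy illustration of what "flawless Lean code" means here (this is Lean 4 syntax, and is nowhere near the difficulty of a real challenge problem): the proof checker either accepts a complete proof or rejects it, with no partial credit for an almost-correct argument.

    -- A trivial, fully machine-checked statement; Nat.add_comm is a
    -- standard library lemma. An AI system would need to produce
    -- gap-free proofs like this for far harder theorems.
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b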

[anonymous]

By "soon" I mean 5 years. Interestingly, I have a slightly higher probability that it will be solved within 20 years, which highlights the difficulty of saying ambiguous things like "soon".

[anonymous]

A few examples would help. The academic papers I see often call out this problem and suggest possible Zs themselves. Generally, X and Y are more easily or precisely measured than the likely Zs, so they make for better publications.

I definitely see the problem in popular articles and policy justification.

Please recommend cheap (in brain-time and text length) ways to represent different levels of confidence in English text and speech. More concretely, I want English words, and ways to use them in sentences, that don't require modifying the sentence's structure. Examples:

  • This is a reason to think that SGD-like algorithms can't learn it. -> This is a reason to think that maybe SGD-like algorithms can't learn it. // sounds kinda bad, I wouldn't write it like that for people who are picky about style; also, what other words work here instead of maybe?
  • because the inputs and the parameters are noisy -> because likely the inputs and the parameters are noisy // seems unclear

Non-examples:

  • SGD-like algorithms can't learn this circuit because the inputs and the parameters are noisy during training. -> It's plausible that SGD-like algorithms can't learn this circuit, because it's plausible that the inputs and the parameters are noisy during training. // the structure of the rest of the sentence had to be modified; also, I had to spend brain-time thinking about how to formulate this, and it's very long

Not sure this is what you're asking for, but it seems relevant: what probabilities do people attach to likelihood adjectives? https://hbr.org/2018/07/if-you-say-something-is-likely-how-likely-do-people-think-it-is

Being explicit about levels of confidence inherently needs brain-time because you have to think about what your level of confidence is.

I am looking for a gears-level introductory course (or a textbook, or anything) in cooking. I want to cook tasty healthy food in an efficient way. I am already often able to cook tasty food, but other times I fail, and often I don't understand what went wrong and how cooking even works.

I've heard good reviews of "Salt, Fat, Acid, Heat" as the definitive "gears level cooking" book, but have never read it myself.

Tim Ferriss's book The 4-Hour Chef does a good job of explaining the how and why, and it is efficiency-oriented.

Recently, however, I moved to the "throw things in the Instant Pot and let it do the cooking" method, and I find it to often be more efficient than the normal way of cooking.

When I did a quick Google search, I started with:

"melatonin stability temperature"

then

"N-Acetyl-5-methoxytryptamine"

From a quick flick through a few abstracts, I can't see anything involving temperatures higher than 37 C, i.e. body temperature.

Melatonin is not a protein; it's a small indoleamine molecule (the N-acetyl-5-methoxytryptamine above), so protein denaturation (which typically starts above about 41 C) isn't the relevant mechanism, though small molecules can still degrade with heat or oxidation.

My (jumped to) conclusion:

No specific data found.

Melatonin may not be stable at high temperatures, so avoid putting it in hot tea.
