All of Jay's Comments + Replies

Normal humans have a fairly limited set of true desires, the sort of things we see on Maslow's hierarchy of needs: food, safety, sex, belonging, esteem, etc.  If you've become so committed to your moral goals that they override your innate desires, you are (for lack of a better word) a saint.  But for most people, morality is a proxy goal that we pursue as a strategy to reach our true goals.  Most people act out a culturally specified version of morality to gain esteem and all that goes with it (jobs, mates, friends, etc.).

Your true desires... (read more)

The ancients considered everything to be the work of spirits.  The medievals considered the cosmos to be a kingdom.  Early moderns likened the universe to a machine.  Every age has its dominant metaphors.  All of them are oversimplifications of a more complex truth.

Suppose you had an identical twin with identical genes and, until very recently, an identical history.  From the perspective of anyone else, you're similar enough to be interchangeable with each other.  But from your perspective, the twin would be a different person.  

The brain is you, full stop.  It isn't running a computer program; its hardware and software are inseparable and developed together over the course of your life.  In other words, the hardware/software distinction doesn't apply to brains.

To call something an "uploaded brain" is to make two claims.  First, that it is a (stable) mind.  Second, that it is in some important sense equivalent to a particular meat brain (e.g., that its output is the same as the meat brain's, or that its experiences are the same as the meat brain's).  The sorts of methods you're talking about to stabilize the mind help with the first claim, but not with the second.

I've always struggled to make sense of the idea of brain uploading because it seems to rely on some sort of dualism.  As a materialist... (read more)

2Noosphere89
I think the crux of it is here: I think that basically everything in the universe can be considered a program/computation, but I also think the notion of a program/computation is quite trivial. More substantively, I think it might be possible to replicate at least some parts of the physical world with future computers that have what is called physical universality, where they can manipulate the physical world essentially arbitrarily. So I don't view brains and computer programs as being of 2 different types, but rather as the same type as a program/computation. See below for some intuition as to why. http://www.amirrorclear.net/academic/ideas/simulation/index.html
0quetzal_rainbow
There are no properties of the brain that define it as "you", except for the program that it runs.

What if there would never be someone I trusted who could tell me I was Good Enough, that things were in some sense Okay?

The internalized feeling that you're not okay is a huge part of what motivates you to become better.  If you lost it, you would be much more likely to become complacent and stagnate.  Both inner peace and relentless drive are profoundly valuable, but they are mutually exclusive.

2Raemon
I agree a lot of people execute "become stronger" this way. It's not too surprising if it turns out to be an essential part of the process (at least when implemented in humans). But, at the very least there seem to be more and less healthy ways of doing it, and I personally think I see the outlines of how I could operate fairly differently and still have my drive.

I certainly agree that brains are complicated.

I think part of the difference is that I'm considering the uploading process; it seems to me that you're skipping past it, which amounts to assuming it works perfectly.

Consider the upload of Bob the volunteer.  The idea that software = Bob is based on the idea that Bob's connectome of roughly 100 trillion synapses is accurately captured by the upload process.  It seems fairly obvious to me that this process will not capture every single synapse with no errors (at least in early versions).  It wil... (read more)

Surely both (1) and (2) are true, each to a certain extent.

Are the random thermal fluctuations pushing me around somehow better than the equally random measurement errors pushing my soft-copy around?

It depends.  We know from experience how meat brains change over time.  We have no idea how software brains change over time; it surely depends on the details of the technology used.  The changes might be comparable, but they might be bizarre.  The longer you run the program, the more extreme the changes are likely to be.

I can't rule it out either.  Nor can I rule it in.  It's conceivable, but there are enough issues that I'm highly skeptical.  

2Ben
I might be misunderstanding your point. My opinion is that software brains are extremely difficult (possibly impossibly difficult) because brains are complicated. Your position, as I understand it, is that they are extremely difficult (possibly impossibly difficult) because brains are chaotic. If it's the former (complexity) then there exists a sufficiently advanced model of the human brain that can work (where "sufficiently advanced" here means "probably always science fiction"). If brains are assumed to be chaotic then a lot of what people think and do is random, and the simulated brains will necessarily end up with a different random seed due to measurement errors. This would be important in some brain-simulating contexts; for example, it would make predicting someone's future behaviour based on a simulation of their brain impossible. (Omega from Newcomb's paradox would struggle to predict whether people would two-box or not.) However, from the point of view of chasing immortality for yourself or a loved one the chaos doesn't seem to be an immediate problem. If my decision to one-box was fundamentally random (down to thermal fluctuations) and trivial changes on the day could have changed my mind, then it couldn't have been part of my personality. My point was, from the immortality point of view, we only really care about preserving the signal, and can accept different noise.

Let's try again.  Chaotic systems usually don't do exactly what you want them to, and they almost never do the right thing 1000 times in a row.  If you model a system using ordinary modeling techniques, chaos theory can tell you whether the system is going to be finicky and unreliable (in a specific way).  This saves you the trouble of actually building a system that won't work reliably.  Basically, it marks off certain areas of solution space as not viable.

Also, there's Lavarand.  It turns out that lava lamps are chaotic.
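
To make the "finicky in a specific way" point concrete, here's a minimal sketch (my own illustration, not anything from the original discussion) using the logistic map, a standard toy chaotic system; the map and the parameter choice are assumptions made purely for demonstration:

```python
# Sensitive dependence on initial conditions in the logistic map at r = 4.0.
# Two trajectories that start 1e-10 apart become completely uncorrelated
# within a few dozen iterations.

def logistic(x, r=4.0):
    return r * x * (1 - x)

x, y = 0.2, 0.2 + 1e-10  # nearly identical starting points
for step in range(1, 61):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  |x-y|={abs(x - y):.2e}")
```

The gap grows roughly exponentially until it saturates at order one, which is the precise sense in which a chaotic system is "finicky": the same design, started from imperceptibly different conditions, does something different every time.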

2Alex_Altair
For what it's worth, I think you're getting downvoted in part because what you write seems to indicate that you didn't read the post.

That wasn't well phrased.  Oops.

Any physical system has a finite amount of mass and energy that limit its possible behaviors.  If you take the log of (one variable of) the system, its full range of behaviors will use fewer numbers, but that's all that will happen.  For example, the wind is usually between 0.001 m/s (quite still) and 100 m/s (unprecedented hurricane).  If you take the base-10 log, it's usually between -3 and 2.  A change of 2 can mean a change from .001 to .1 m/s (quite still to barely noticeable breeze) or a change from 1 m/s to 100 m/s (modest breeze... (read more)
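
A quick sketch of the arithmetic in that example (the wind speeds are the ones quoted above; nothing new is being claimed):

```python
import math

# The same change of 2 in log10 space corresponds to wildly different
# physical changes in wind speed.
pairs = [(0.001, 0.1),   # quite still -> barely noticeable breeze
         (1.0, 100.0)]   # modest breeze -> unprecedented hurricane
for low, high in pairs:
    print(f"{low:>7} m/s -> {high:>6} m/s : "
          f"delta(log10) = {math.log10(high) - math.log10(low):.1f}, "
          f"delta(speed) = {high - low:.3f} m/s")
```

Both rows show a log-space change of 2.0, but the physical differences are a factor of a thousand apart; taking the log rescales the numbers without changing what the system can do.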

That's the point.  Nobody thought such tiny variations would matter.  The fact that they can matter, a lot, was the discovery that led to chaos theory.

Consider - A typical human brain has ~100 trillion synapses.  Any attempt to map it would have some error rate.  Is it still "you" if the error rate is .1%?  1%? 10%?  Do positive vs. negative errors make a difference (i.e. missing connections vs. spurious connections)?  

Is this a way to get new and exciting psychiatric disorders?

I don't know the answers, or even how we'd try to figure out the answers, but I don't want to spend eternity as this guy.  
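
For a sense of scale, here's a rough back-of-envelope (my own, using the ~100 trillion figure above): even the smallest of those error rates is an enormous absolute number of missing or spurious connections.

```python
# Absolute number of mis-mapped synapses at various error rates,
# assuming ~1e14 synapses in a typical human brain.
synapses = 1e14
for error_rate in (0.001, 0.01, 0.10):
    print(f"{error_rate:>5.1%} error rate -> ~{synapses * error_rate:.0e} "
          "synapses missing or spurious")
```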

4tailcalled
Trial and error? E.g. first you upload animal subjects and see what fidelity seems to preserve all the animal traits you can find. At some point you then start with human volunteers (perhaps preferentially dying people?), and see whether the rates that seem to work for nonhuman animals also work for humans. Also I guess once you have a mostly-working human upload, you can test perturbations to this upload to see what factors they are most sensitive to.
2Jay
That wasn't well phrased.  Oops.

an exponential decrease in measurement error will only buy you a linear increase in how long that simulation is good for.

True, and in the real world attempts to measure with extreme precision eventually hit limits imposed by quantum mechanics.  Quantum systems are unpredictable in a way that has nothing to do with chaos theory, but that cashes out to injecting tiny amounts of randomness in basically every physical system.  In a chaotic system those tiny perturbations would eventually have macroscopic effects, even in the absence of any other sources of error.
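
The quoted claim can be made precise with the standard Lyapunov picture; this is a textbook sketch (assuming a single dominant exponent), not something from the original thread. Here \(\delta_0\) is the initial measurement error, \(\lambda\) the largest Lyapunov exponent, \(\Delta\) the tolerance beyond which the simulation is no longer "good", and \(t^{*}\) how long it stays good:

```latex
\delta(t) \approx \delta_0 \, e^{\lambda t}
\qquad\Longrightarrow\qquad
t^{*} \approx \frac{1}{\lambda}\,\ln\frac{\Delta}{\delta_0}
```

Since \(t^{*}\) depends on \(\delta_0\) only through a logarithm, cutting the measurement error in half buys a fixed \(\ln 2 / \lambda\) of extra time: exponentially better measurements buy only linearly more trustworthy simulation, which is exactly the trade the quote describes.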

1Hudjefa
I don't know the exact values Lorenz used in his weather simulation, but Wikipedia says "so a value like 0.506127 printed as 0.506". If this were atmospheric pressure, we're talking about a millionth decimal place precision. I don't know what exerts 0.000001 Pa of pressure or to what extent such a teeny pressure matters.

The seminal result for chaos theory came from weather modeling.  Edward Lorenz restarted an atmospheric simulation partway through, re-entering the initial conditions from a printout, but the run didn't reproduce his earlier results.  It turned out that the printout had rounded the data to three decimal places while the computer worked with six.  The tiny rounding errors compounded into larger errors, and over the course of an in-model month the predictions completely diverged.  An error that small is roughly comparable to the flap of a butterfly's wing, whi... (read more)

2Noosphere89
The claim that uploaded brains don't work because of chaos turns out not to work so well, because it's usually easier to control the divergence than to predict it: you can use strategies like fast-feedback control to prevent yourself from ever getting into the chaotic region. More generally, a lot of misapplication of chaos theory starts by incorrectly assuming that hardness of prediction equals hardness of control, without other assumptions. See more below: https://www.lesswrong.com/posts/epgCXiv3Yy3qgcsys/you-can-t-predict-a-game-of-pinball#wjLFhiWWacByqyu6a I also like tailcalled's comment on the situation.
7Ben
My position is that either (1) my brain is computationally stable, in the sense that what I think, how I think it and what I decide to do after thinking is fundamentally about my algorithm (personality/mind), and that tiny changes in the conditions (a random thermal fluctuation), are usually not important. Alternatively (2) my brain is not a reliable/robust machine, and my behaviour is very sensitive to the random thermal fluctuations of atoms in my brain. In the first case, we wouldn't expect small errors (for some value of small) in the uploaded brain to result in significant divergence from the real person (stability). In the second case I am left wondering why I would particularly care. Are the random thermal fluctuations pushing me around somehow better than the equally random measurement errors pushing my soft-copy around? So, I don't think uploaded brains can be ruled out a priori on precision grounds. There exists a non-infinite amount of precision that suffices, the necessary precision is upper bounded by the thermal randomness in a body temperature brain.
9tailcalled
This is true, but also e.g. minor environmental perturbations like seeing something at a slightly different time would also cause one to diverge from what one otherwise would have been in a fairly short time, so it seems like any notion of personal identity just has to be robust to exponential divergence.
2Elizabeth
This seems like a very underpowered sentence that doesn't actually need chaos theory. How do you know you're in a system that is chaotic, as opposed to having shitty sensors or a terrible model? What do you get from the theory, as opposed to the empirical result that your predictions only stay accurate for so long? [For everyone else: Hastings is addressing these questions more directly. But I'm still interested in what Jay or anyone else has to say].
5philip_b
Well, now I'm wondering - is neural network training chaotic?

I don't want to live in a world where there's only the final survivors of selection processes who shrug indifferently when asked why we don't revive all the beings who were killed in the process which created the final survivors.

If you could revive all the victims of the selection process that brought us to the current state, all the crusaders and monarchists and vikings and Maoists and so, so many illiterate peasant farmers (on much too little land because you've got hundreds of generations of them at once, mostly with ideas that make Putin look like Soni... (read more)

5quila
Some clarifications:

  • I'm thinking of this in the context of a post-singularity future, where we wouldn't need to worry about things like conflict or selection processes.
  • By 'the ones who were killed in the process', I was thinking about e.g. herbivorous animals that were killed by predator species[1], but you're correct that it could include humans too. A lot of humans have been unjustly killed (by others or by nature) throughout history.
  • I think my endorsed morals are indifferent about the (dis)value of reviving abusive minds from the past, though moral-patient-me dislikes the idea on an intuitive level, and wishes for a better narrative ending than that. (Also I upvoted your comment from negative)

I also notice some implied hard moral questions (What of current mean-hearted people? What about the potential for past ones of them to have changed into good people? etc.)

  1. ^ As a clear example of a kind of being who seems innocent of wrongdoing. Not ruling out other cases, e.g. plausibly inside the mind of the cat that I once witnessed killing a bunny, there could be total naivety about what was even being done. Sort-of relatedly, I basically view evolution as having favored the dominance of agents with defect-y decision-making, even though the equilibrium of 'collaborating with each other to harness the free energy of the sun' would have been so much better. (Maybe another reason that didn't happen is that there would be less of a gradual buildup of harder and harder training environments, in that case)

In that case, it's non-farmers by a good margin.  Our ancestry goes back well over a billion years, mostly in species with short generation times.  Farming goes back roughly ten thousand years in a single species with a ~25 year generation time.

3Linch
The question specified the restriction to humans in the last 300,000 years.

Almost everyone believes one and only one of the following statements:

  1. We have, or can soon develop, the technology to power society with clean energy at a modern standard of living, or
  2. Global warming is not a major problem (e.g. it won't damage agricultural productivity to famine-inducing levels).

Logically, these statements have nothing to do with each other.  Either, neither, or both could be correct (I suspect neither).  They are, in the words of the post, questionable assumptions.

See also Beyond the Peak – Ecosophia and Germany's economy strugg... (read more)

1[comment deleted]

FYI, the green energy field is much, much worse in this regard than the AI field.

4Raemon
Curious for details.

I should have added - Determine whether this is a modeling problem or a manufacturing problem.  If the model was sound but the physical screw was faulty, you'll need an entirely different response.

Actually ideal:

  1. Reinforce that screw by the end of the day.
  2. Fix the modeling error by the end of the week.
  3. Develop a more robust modeling methodology over the next few months.
  4. Brainstorm ideas to improve the institutional culture (without sacrificing flexibility, because you're aware that these values require a tradeoff).  Have a proposal ready for the next board meeting.
3Jay
I should have added - Determine whether this is a modeling problem or a manufacturing problem.  If the model was sound but the physical screw was faulty, you'll need an entirely different response.

Now you're getting it.  The world can't be fixed.  It can't even be survived.  But it can be a nice place to live.

The worst people in the world, the Stalins and the Osama bin Ladens, try to be heroes but they're as flawed as anyone else.  If they start to succeed those flaws can manifest in horrifying ways.  They often destroy imperfect but necessary things in attempts to build perfect things that can't exist, like true Communism or functional political Islam.  Humility and temperance are called heavenly virtues for a reason.

That even though I decided that my morality would never demand that I be a hero... there nonetheless just isn't a coherent, enduring shape that fits my soul that doesn't make that the thing I ultimately want for myself.

Reading that, I'm not sure whether you're grieving because you've given up on that belief or because it's true.  I hope the former.  The desire to be a hero is dangerous - a hero needs villains.  As Nietzsche said, he who fights with monsters might take care lest he thereby become a monster.

I'm not saying that you shouldn't do... (read more)

4CronoDAS
If it's God's job, then it's definitely not going to get done, because something that doesn't exist can't do anything.

I think you're onto something.  I think, for this purpose, "child" means anyone who doesn't know enough about the topic to have any realistic chance at successful innovation.  A talented 16 year old might successfully innovate in a field like music or cooking, having had enough time to learn the basics.  When I was that age kids occasionally came up with useful new ideas in computer programming, but modern coding seems much more sophisticated.  In a very developed field, one might not be ready to innovate until several years into gradua... (read more)

A child who's educated in a Salafi school has two choices - become a Salafi or become a failed Salafi.  One of those is clearly better than the other.  Salafis, like almost every adult, know how to navigate their environment semi-successfully, and the first job of education is to pass on that knowledge.  It would be better if the kid could be given a better education, but the kid won't have much control over that (and wouldn't have the understanding to choose well).  Kids are ignorant and powerless; that's not a function of any particular politic... (read more)

5Jiao Bu
We are also overloading the word "Child" here, which we may need to disambiguate at this point. What you are saying applies broadly to a 7 year old, and less to a 16 year old.   For the 16 year old, there's no longer 2 possible outcomes "succeed as a Salafi" or "fail as a Salafi."  There is often the very real option to "Make your way towards something else."  And the seeds of that could easily start (probably did!) in the 13 or 14 year old. It's also neat that humans are kind of wired where the great questioning/rebellion tends to happen more in the 13-to-16-year-old than the 7-year-old.  Thus the common phenomenon where the person graduates high school and church at the same time, or leaves the cult, emigrates, etc.

Yeah, that's the idea I was going for.  

I propose a test - if apologizing for or clarifying a controversial position is obviously a bad move, you're dealing with Level 4 actors.  In such cases, your critics don't care about what you believe.  Their narrative calls for a villain, and they've chosen you.

I think there are more limits than that because plausibility matters.  The set of positions Ibram X Kendi could plausibly take is very different from the positions available to Donald Trump.  Too big a reach and you'll look insincere, opportunistic, or weak.  It's easy to alienate your social coalition and much harder to gain acceptance in a new one.

You seem to be steering in the direction of postmodernism, which starts with the realization that there are many internally consistent yet mutually exclusive ways of modeling the world.  Humility won't solve that problem, but neither will a questioning mindset.  

Every intellectual dead-end was once the product of a questioning mind.  Questioning is much more likely to iterate toward a dead end than to generate useful results.  This isn't to say that it's never useful (it obviously can be), but it rarely succeeds and is only the optimal ... (read more)

1Jiao Bu
"The best way to get out of a local maximum that I've found is to incorporate elements of a different, but clearly functional, intellectual tradition." I agree wholeheartedly with this being a good way (Not sure about "best").  The crux is "clearly functional" and "maxima" -- and as an adult, I can make pretty good judgments about this.  I'm also likely to bake in some biases about this that could be wrong.  And depending on what society you find yourself within, you might do the same. If I understand you, you are basically asking to jump from one maxima to another, assuming that in doing this search algorithm, you will eventually find a maxima that's better than the one you're in, or get enough information to go back to the previous one.  And we limit our search on "functional." But what if you have little information or priors available as to what would be functional or not, or even what constitutes a maxima?  There's no information telling a child not to go join a fringe religious group, for example (and I think they often do their recruiting among the very young, for this reason).   Moreover, if someone (1) without clear criteria for what constitutes a "maxima" or "functional," or (2) who may even wish to explore other models of "functional" because they suspect their current model may be self-limiting, then we get to questioning. And I think in (2) above, I am defining the positive side of post-modernism, which also exists and contributes to our society.  The most salient criticism of post-modernism is usually that it is anti-heirarchical, yet insisting it is a better approach than those before it, constitutes a performative contradiction.  Also, I think they are sometimes guilty of taking a "noble savage" approach to other cultures or ways of thinking (failure to judge what is functional). However, if we combine the "questioning" (broad search, willing to approach with depth where it seems useful), with some level of judgement about "functional" (assumin

I don't know about Moldova, but it seems obvious that the creation of modern Israel depended on the idea that the Palestinians could be managed and equally obvious that it hasn't worked out that way.  The only real endgames are genocide or leaving, and personally I'd vote for leaving.

1Arcayer
Moldova isn't the only plausible option or anything; my reasoning is just that it has good land, the population is low enough that they could be bought out at a price that isn't too absurd, they're relatively poor and could use the money, it's a relatively new country with a culture similar to a number of other countries, and it's squarely in western territory and thus shouldn't be much of a source of conflict.

In 14 centuries of Islamic history from Spain to Indonesia, with limited travel and much regional variation for most of it, there will be many opportunities to find examples that match our own culture's Current Thing.  Some Muslims are hypocrites; some Westerners look for homosexual subtext where none was intended.  Many Muslim empires have risen in vigor and fallen in decadence.  Still, the orthodox position is clear - homosexuality is both sinful and illegal.  I've seen a Jew eat pork and laugh it off; it would be a mistake to make po... (read more)

Trying to replace enemy leadership with more congenial leadership never works.  You reliably get a corrupt puppet government and an insurgency.  The only exception is after a comprehensive defeat of the enemy (i.e. post-WWII Germany and Japan), which begs the question (in the sense that this tactic would allow us to win only in cases where we've already won).

Answer by Jay10

One thing to keep in mind is that a lot of the food supply in the Middle East and North Africa used to come from Russia and Ukraine (link).  Actions targeted at increasing food security in the region won't solve the political issues, but probably would turn down the temperature of the region.  It would also help with the immigration crisis in Europe.  It's not an easy or quick solution (growing food takes time, and the scale of the problem is staggering), but it's hard to see how peace would come without food.

Sun Tzu says that the keys to victory lie in knowing yourself and your enemy.  When I got to #4, it became obvious that you know very little about Islam.  There are no LGBTQ+ safe spaces in Islam.  A relevant wikipedia page says "Homosexual acts were forbidden (haram) in traditional Islamic jurisprudence and therefore were subject to punishment. The types of punishment prescribed for non-heterosexual activities include flogging, stoning, and the death penalty, depending on the particular situation and the school of thought."  The major ... (read more)

3Yair Halberstadt
This hasn't historically always been the case - there was widespread public acceptance of homosexuality in the first 500 years of Islam's existence, with homoerotic poetry being a staple of their culture - see e.g. here. Judaism also unequivocally rejects homosexuality, yet many modern orthodox synagogues happily have openly gay members of their congregation. So this doesn't seem quite as impossible as you make out.
3Viliam
I guess it would make more sense to try to emulate a more secular country, such as Turkey. A country where individual people can be religious, but the state as a whole is not. A priest can tell you that something is a sin, but cannot organize your murder.

Me - Ph.D. in solid state materials chemistry.  Been out of the game for a while.  Less understanding of physics than some other commenters but have a different perspective that might be useful.

My first thought is that they have a minority phase; the samples are likely ~99% LK99 and ~1% unknown phase with weird properties.  You can see it in the video; part of the specimen is levitating but a corner of it isn't.  

The second thing I would do is try to make a bunch of variants with slightly different compositions to identify the minority ... (read more)

Strongly upvoted.  A few comments:

I think of a human being as a process, rather than a stable entity.  We begin as embryos, grow up, get old, and die.  Each step of the process follows inevitably from the steps before.  The way I see it, there's no way an unchanging upload could possibly be human.  An upload that evolves even less so, given the environment it's evolving in.

On a more practical level, the question of whether a software entity is identical to a person depends on your relationship to that person.  Let's take Elize... (read more)

A lot of the nonprofit boards that I've seen use a "consent agenda" to manage the meeting.  The way it works is:

  • The staff create the consent agenda and provide it to the board members perhaps a week in advance.
  • Any single board member can take any item off the consent agenda and onto the regular agenda.
  • The consent agenda is passed in a single motion.  It always passes unanimously, because anything that any member thinks merits attention has been moved onto the regular agenda (where it is separately discussed and voted on).

It doesn't do much for governance directly, but fewer time-wasting consent votes can make room for more discussion of issues that matter.

In the US, parties still aren't recognized by the Constitution.  Every election is a choice between all of the people who qualify for the ballot for each office.  Several groups of like-minded politicians quickly emerged, and over time these became our major parties.  

It's not uncommon for an American candidate to run as an independent (i.e. not affiliated with a party), although they hardly ever win. 

1Phil Scadden
I don't think the US government would fit the normal definition of a modern parliament. We (NZ) have had the odd independent in parliament but extremely rare - generally an electorate MP that has fallen out with their party. Much more common in Australia but they have a different voting system (preferential in Aus, versus MMP here). As to the mess in Israel, they also have MMP, but with a threshold of only 3% to get an MP into parliament. Any time in the last 28 years that people complain that our threshold is too low, Israel and Italy are pointed to as why lowering it would be a bad idea. The US to my mind has the power structure upside down - too much power concentrated in the executive with little in the way of handbrakes. Parliaments generally have a president/monarch as constitutional backstop instead. A number of parliaments go further (e.g. UK, Canada, Australia and NZ) and have parliamentary supremacy, where parliament can overrule both the executive (aka backbench revolt) and the judiciary.

To the extent that I understand what you're saying, you seem to be arguing for curiosity as a means of developing a detailed, mechanistic ("gears-level" in your term) model of reality.  I totally support this, especially for the smart kids.  I'm just trying to balance it out with some realism and humility.  I've known too many people who know that their own area of expertise is incredibly complicated but assume that everything they don't understand is much simpler.  In my experience, a lot of projects fail because a problem that was assumed to be simple turned out not to be.

2Vladimir_Nesov
This is useless in practice and detrimental to being a living encyclopedia, distracting from facts deemed salient by civilization. Combinatorial models of more specific and isolated ideas you take an interest in, building blocks for reassembling into related ideas, things that can be played with and not just taken from literature and applied according to a standard methodology. The building blocks are not meant to reconstruct ideas directly useful in practice, it's more about forming common sense and prototyping. The kind of stuff you learn in the second year of college (the gears, mathematical tools, empirical laws), in the role of how you make use of it in the fourth year of college (the ideas reassembled from them, claims independently known that interact with them, things that can't be explained without the background), but on the scale of much smaller topics. Well, that's the attempt to channel my impression of the gears/policy distinction, which I find personally rewarding, but not necessarily useful in practice, even for research. It's a theorist's aesthetic more than anything else.

I get your point, and I totally agree that answering a child's questions can help the kid connect the dots while maintaining the kid's curiosity.  As a pedagogical tool, questions are great.  

Having said that, most people's knowledge of most everything outside their specialties is shallow and brittle.  The plastic in my toothbrush is probably the subject of more than 10 Ph.D. dissertations, and the forming processes of another 20.  This computer I'm typing on is probably north of 10,000.  I personally know a fair amount about how t... (read more)

2Vladimir_Nesov
My point is not that one should learn more, but about understanding naturally related to any given claim of fact, whose absence makes it brittle and hollow. This sort of curiosity does apply to your examples, not in a remedial way that's only actually useful for other things. The dots being connected are not other claims of fact, but alternative versions of the claim (including false ones) and ingredients of motivation for looking into the fact and its alternatives, including more general ideas whose shadows influence the claim. These gears of the idea do nothing for policies that depend on the fact, if it happens to be used appropriately, but tend to reassemble into related ideas that you never heard about (which gives an opportunity to learn what is already known about them). It doesn't require learning much more, or about toothbrushes, it's instead emphasis of curiosity on things other than directly visible claims of fact, that shifts attention to those other things when presented with a given claim. This probably results in knowing less, with greater fluency.

Being honest, for nearly all people nearly all of the time, questioning firmly established ideas is a waste of time at best.  If you show a child, say, the periodic table (common versions of which have hundreds of facts), the probability that the child's questioning will lead to a significant new discovery is less than 1 in a billion* and the probability that it will lead to a useless distraction approaches 100%.  There are large bodies of highly reliable knowledge in the world, and it takes intelligent people many years to understand them well e... (read more)

1Jiao Bu
"There are large bodies of highly reliable knowledge in the world,[...]" The purpose of the questioning is to find out which objects are in that bucket, and which objects are in some other bucket. If the child accepts what she is told about (A)There are large bodies of highly reliable knowledge in the world, and (B) This is one of them, then you might get many types of crazy. TH;DT:  The idea of firmly established ideas is unfortunately culturally and sub-culturally bound, at least to an extent.  Which "firmly established truths" are currently being taught in Shalafi schools?  I think the "flat-earthers, Qanon, etc...," could easily destroy the nonsense of their beliefs if they could employ a bit of the questioning. Maybe what you and I are saying is a strong case of reversible advice?
8Vladimir_Nesov
The relevant purpose is new discoveries for the child, which is quite plausible. Insufficiently well-understood claims are also not really known, even when they get to be correctly (if not validly) accepted on faith. (And siloing such claims inside appropriate faith-correctness/source-truthfulness hypotheticals is still superior to accepting them unconditionally.) There is also danger of discouraging formation of gears level understanding on the basis of irrefutability of policy level knowledge, rendering ability to make use of that knowledge brittle. The activity of communicating personal discoveries to the world is mostly unrelated.

I think a better way to look at it is that frequentist reasoning is appropriate in certain situations and Bayesian reasoning is appropriate in other situations.  Very roughly, frequentist reasoning works well for descriptive statistics and Bayesian reasoning works well for inferential statistics.  I believe that Bayesian reasoning is appropriate to use in certain kinds of cases with a probability of (1-delta), where 1 represents the probability of something that has been rationally proven to my satisfaction and delta represents the (hopefully small) probability that I am deluded.
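
A toy contrast along those lines (entirely my own illustration; the data, the uniform prior, and the rough Wald interval are all assumptions for the example): describing observed coin flips versus inferring the coin's underlying bias.

```python
import math

heads, n = 7, 10
tails = n - heads

# Frequentist / descriptive: report the observed frequency, with a rough
# 95% Wald interval around it.
p_hat = heads / n
se = math.sqrt(p_hat * (1 - p_hat) / n)
print(f"observed frequency {p_hat:.2f}, "
      f"~95% CI [{p_hat - 1.96 * se:.2f}, {p_hat + 1.96 * se:.2f}]")

# Bayesian / inferential: start from a uniform Beta(1, 1) prior over the
# coin's bias and update on the data; the posterior is Beta(1+heads, 1+tails).
a, b = 1 + heads, 1 + tails
print(f"posterior Beta({a}, {b}), posterior mean {a / (a + b):.2f}")
```

The first block only summarizes the sample; the second makes a claim about the unseen parameter, which is the descriptive/inferential split in miniature.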

Wars are an especially nasty type of crisis because there's an enemy.  That enemy will probably attempt to use your software for its own ends.  In the case of your refugee heatmap idea, given that the Russians are already massacring civilians, that might look like a Russian artillery commander using it to deliberately target refugees.  Alternately, they might target incoming buses to prevent the refugees from getting out of the Ukrainian military's way and make the Ukrainians spend essential resources on feeding and protecting them.  

2GabrielExists
This is an important consideration, like how could the software be misused? However, in this concept, artillery and targeting buses wouldn't be an issue, because the entire concept takes place outside Ukraine. It's specifically for getting people from Poland (or other close-by countries) to other, more distant, countries that have more capacity. For this to be a problem, they'd have to shell Polish cities, and if they do, we have bigger problems.

Does the Russian military even have the tech dependencies that would make them vulnerable to cyber attacks?  I think they're pretty analog.

I spent about 20 years in academic and industrial research, and my firm belief is that almost nobody spends nearly enough time in the library.  There have been hundreds of thousands of scientists before you; it is overwhelmingly likely that your hot new idea has been tried before.  The hard part is finding it; science is made up of thousands of tiny communities that rarely talk to each other and use divergent terminology.  But if you do the digging, you may find a paper from Egypt in 1983 that describes exactly why your project isn't working (real example).  Finding that paper two weeks into the project is much better than finding it five years later.

The US has at least 16 intelligence agencies, but we still went into Iraq.  

Oddly, it's probably easier for Putin to get credible information about Ukraine's military than about his own.  Fewer people have an interest in lying to him about Ukraine.

Richard Hanania's Public Choice Theory and the Illusion of Grand Strategy is worth reading on this general topic, although it's mostly about the Iraq war.

He doesn't use these words (that I recall, and I'm only partway through), but the behavior of states is less like rationality and more like autism.  What I mean is that states act on internal motivations that are only very weakly coupled to the objective strategic reality.  

Putin probably didn't have access to reliable information about the capabilities of his military vs. Ukraine's military... (read more)

1superads91
"Putin probably didn't have access to reliable information about the capabilities of his military vs. Ukraine's military." I find that hard to believe in a country with one of the most relentless intelligence services. Could be. But I wouldn't bet on it.

Re aliens - Fair enough.  Some very simple alien, perhaps the Vulcan equivalent of a flatworm, may be well within our capability to understand.  Is that really what we're interested in?

Re machine learning - The data for machine learning is generally some huge corpus.  The question is whether we're even capable of understanding the data in something like the manner the algorithm does.  My intuition says no, but it's an open question.

I'd like to add two pieces of evidence in favor of the weak unlearnability hypothesis:

(1) Humpback whales have songs that can go on for days.  Despite decades of study, we don't really understand what they're saying.

(2) The output of machine learning algorithms (e.g. Google's Deep Dream) can be exceedingly counterintuitive to humans.

Whales are our distant cousins and humans created machine learning.  We might reasonably suppose that actual aliens, with several billion years of completely independent evolution, might be much harder to understand.

3Pattern
If they are complex, this seems obvious. Simple, less so. Even ignoring hardware, they can be modeled as being programmed by the data. Perhaps we don't understand the data?

We actually do pretty much the opposite of that in the U.S.  Student loans have a Federal guarantee, so the incentive is to sign people up for as much education as possible.  If they succeed, great.  If they fail, they'll be paying off the loans until they die, at which time Uncle Sam will pay the balance.  With compounding interest, the ones who fail are the most profitable.
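
A rough sketch of why that incentive works out that way (all of the numbers below are hypothetical; the point is only the shape of the arithmetic): if interest accrues faster than the borrower can pay, the balance grows even as decades of payments pile up.

```python
# Hypothetical borrower: payments never quite cover the accruing interest,
# so the balance grows while lifetime payments accumulate anyway.
balance = 50_000.0         # hypothetical principal
annual_rate = 0.07         # hypothetical interest rate
payment_per_year = 3_000   # hypothetical payments, just under first-year interest
total_paid = 0.0

for year in range(40):     # forty years of paying "until they die"
    balance *= 1 + annual_rate   # interest compounds
    balance -= payment_per_year  # that year's payments are applied
    total_paid += payment_per_year

print(f"paid ${total_paid:,.0f} over 40 years; ${balance:,.0f} still owed")
```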
