All of IrenicTruth's Comments + Replies

I was frustrated by the lack of a yearly donation option or an option to make a recurring donation of less than $10/month. I almost decided not to give because this communicated that it's not worth the effort to receive contributions from small-value donors like me. And if it's not worth the effort to accept, it's certainly not worth the cost of giving.

However, I decided to give a one-time payment of $10 assuming that this was from ignorance or carelessness. If you'd like to signal that a recurring donation (or more donations from small players like me) ar... (read more)

> Is it a matter of adding a standard paragraph to NIH grants?

Yes. That's what I was thinking of.

> If you follow standard DEI criteria

I'm commenting on LessWrong; I don't do "standard." 😉

More seriously, I apologize. I should have clarified what I meant by diversity. In particular, I mean that diverse groups are spread out in a parsimonious description space.

A pretty detailed example

As a concrete example of one understanding that would match my idea of diversity, consider some very high-dimensional space representing available people who can also do the work measured on as many axes as you can use to characterize them (characteristics of mind, body,... (read more)

2ChristianKl
If you promote "diversity," you have to keep in mind not only what you mean by it but also how the policy is likely to work in practice.

In practice, some dimensions are easy to measure, like race and gender. Other dimensions are harder to measure. Some dimensions are also not conducive to research progress: researchers with IQs under a hundred are underrepresented in grant-giving. Then there are variables like vaccination status, where being unvaxxed does not worsen your ability to do research the way a lower IQ does, but there are perspectives on medical research that will correlate with vaccination status.

If your policy tries to increase the representation of unvaxxed researchers, that might threaten hegemonic beliefs, so a research bureaucracy will likely prefer increasing the representation of minority races that are unlikely to threaten any hegemonic beliefs.

If you don't specify the dimensions, the dimensions selected will most likely be those that don't threaten the hegemony of current opinions, and thus the dimensions least likely to actually matter for diversity of ideas; the selected dimensions might even be chosen to strengthen the hegemony of the existing ideas.

If you actually want real diversity, by doing things like calling for diversity in vaccination status, you should say so explicitly.

> [I] suspect [vaccines] (or antibiotics) account for the majority of the value provided by the medical system

Though I agree that vaccines and antibiotics are extraordinarily beneficial and cost-effective interventions, I suspect you're missing essential sources of value in our medical system. Two that come to mind are surgery and emergency medicine.

I've spoken to several surgeons about their work, and they all said that one of the great things about their job is seeing the immediate and obvious benefits to patients. (Of course, surgery wouldn't be nearly ... (read more)

  1. If a product derives from federally funded research, the government owns a share of the IP for that product. (This share should be larger than the monetary investment in the grants that bore fruit, since the US taxpayer funds a lot of early-stage research, only a little of which will result in IP. So, this system must count the investments that didn't pan out as part of the total investment required to produce that product; see the toy calculation after this list.)
  2. Fund grants based on models of downstream benefit. Four things that should be included as "benefits" in this model are increase
... (read more)
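
A toy calculation of the accounting in point 1, with entirely made-up numbers: if the share is priced off only the winning grant, the taxpayer never recovers the cost of the grants that didn't pan out.

```python
# Toy calculation (all numbers made up): price the government's IP share
# off the whole research portfolio, not just the one grant that paid off.
portfolio_grants = 100        # early-stage grants funded by taxpayers
cost_per_grant = 1_000_000    # dollars
successes = 5                 # grants that eventually yielded marketable IP

naive_basis = cost_per_grant                                     # only the winning grant
portfolio_basis = portfolio_grants * cost_per_grant / successes  # failures included

print(f"naive cost basis per product:     ${naive_basis:,.0f}")      # $1,000,000
print(f"portfolio cost basis per product: ${portfolio_basis:,.0f}")  # $20,000,000
```
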
2ChristianKl
How would you do that in practice? Is it a matter of adding a standard paragraph to NIH grants?
2ChristianKl
There are different kinds of diversity.

It seems to me like the decision of the Ida Rolf Foundation to start funding research had good downstream effects that we see in recent advances in understanding fascia. That foundation being able to fund things that the NIH wouldn't fund was important. Getting a knowledge community like the Rolfers included among academic researchers is diversity that produces beneficial research outcomes. If you follow standard DEI criteria, they don't help you with a task like integrating the Rolfing perspective. They don't get you to fund a white man like Robert Schleip.

I would suspect that coming from a background of economic poverty means you likely have less slack to use learning about knowledge communities besides the mainstream academic community. Having the time to spend in relevant knowledge communities seems to me like a sign of economic privilege.

Maybe you could get something relevant by focusing on diversity of illness burden within your researcher community, as people with chronic illnesses might have spent a lot of time acquiring knowledge that produces useful perspectives, but I doubt that standard DEI criteria get you there.

I shy away from fuzzy logic because I used it as a formalism to justify my religious beliefs. (In particular, "Possibilistic Logic" allowed me to appear honest to myself—and I'm not sure how much of it was self-deception and how much was just being wrong.)

The critical moment in my deconversion came when I realized that if I was looking for truth, I should reason according to the probabilities of the statements I was evaluating. Thirty minutes later, I had gone from a convinced Christian speaking to others, leading in my local church, and basing my life and... (read more)

The next post is Secular interpretations of core perennialist claims. Zhukeepa should edit the main text to explicitly link to it rather than just mentioning that it exists. (Or people could upvote this comment so it's at the top. I don't object to more good karma.)

2Ben Pace
Good point. I have edited it into the last line of the post.

I think you're missing a few parts. The Autofac (as specified) cannot reproduce the chips and circuit boards required for the AI, the cameras' lenses and sensors, or the robot's sensors and motor controllers. I don't think this is an insurmountable hurdle: a low-tech (not cutting-edge) set of chips and discrete components would serve well enough for a stationary computer. Similarly, high-res sensors are not required. (Take it slow and replace physical resolution with temporal resolution and multiple samples.)

Second, the reproduced Autofacs should be built ... (read more)
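
A minimal sketch of the "temporal resolution" point above, assuming independent, zero-mean sensor noise: averaging n repeated readings from a cheap sensor shrinks the error by roughly a factor of √n.

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 0.7   # quantity a cheap, noisy, low-res sensor is measuring
noise_sd = 0.2     # per-reading noise of that sensor

one_shot = true_value + rng.normal(0, noise_sd)
averaged = true_value + rng.normal(0, noise_sd, size=1000).mean()

print(f"single-reading error: {abs(one_shot - true_value):.4f}")
print(f"1000-reading error:   {abs(averaged - true_value):.4f}")  # ~noise_sd/sqrt(1000)
```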

3Carl Feynman
If you look at what I wrote, you will see that I covered both of these.

For large enough cases, changing the legal system is a way to make the debtor/lender "disappear." Ownership and debt are both based on society-level agreement.

The "current leader is also the founder" is a reasonable characteristic common in cults. Many cult-like religious organizations exist to create power or wealth for the founder or the founder's associates.

However, I suspect that the underlying scoring function is a simple additive model (widespread in psychology) in which each answer contributes a weight toward one of the outcomes. Since this characteristic is most valuable in combination, intensifying the other factors that indicate cultishness, it doesn't serve very well in the current framework.
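
A toy sketch of the difference (the question names and weights are mine, not the survey's): in a purely additive score, "founder still leads" contributes the same amount no matter what else is true, whereas an interaction term would let it amplify the other cult markers.

```python
# Hypothetical answers (1 = yes, 0 = no) and weights for three cult markers.
answers = {"founder_still_leads": 1, "isolates_members": 1, "demands_money": 1}
weights = {"founder_still_leads": 0.5, "isolates_members": 2.0, "demands_money": 1.5}

# Additive model: each answer contributes its weight independently.
additive_score = sum(weights[q] * a for q, a in answers.items())

# Interaction model: founder-led-ness multiplies the rest of the evidence.
other_evidence = sum(weights[q] * a for q, a in answers.items()
                     if q != "founder_still_leads")
interactive_score = other_evidence * (1.5 if answers["founder_still_leads"] else 1.0)

print(additive_score, interactive_score)  # 4.0 vs. 5.25
```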

You may want to mention, in the first question asking about cultishness, that people will get to revise their initial estimate after seeing the rest of the questions. I discarded and restarted the survey halfway through because I realized your definition was far removed from my initial one. If I'd known about the ability to re-estimate at the end, you'd have another data point. (For reference, my initial number was 25%, which I dropped to 4% on the re-run. The final score ended up being 3%.)

Your argument boils down to:

  • Objectivity is X
  • Y is not X
  • (Because you want to be objective) Don't do Y

I want to Win. Being Pascal Mugged is not Winning. Therefore I will make choices to not be Pascal Mugged. If that requires not being "objective," according to your definition, I don't want to be objective.

However, I have my own use of "objective" that comports well with adapting to new information and using my predictive powers. But I don't want to argue that my usage is better or worse; it will be fruitless. I mention it so readers won't think I'm hypo... (read more)

I haven't listened to the video yet. (It's very long, so I put it on my watch-later list.) Nor have I finished Eliezer's Sequences (I'm on "A Technical Explanation of Technical Explanation.") However, I looked at the above summaries to decide whether it would be worth listening to the video.

Potential Weaknesses

  • None of the alternative books say anything about statistics. A rough intro to Bayesian statistics is an essential part of the Sequences. Without this, you have not made them superfluous.
    • A rough understanding of Bayesian statistics is a valuable t
... (read more)

Duplicating the description

TimePoints

  • 00:00 intro
  • 0:53 most of the sequences aren't about rationality; AI is not rationality
  • 3:43 lesswrong and IQ mysticism
  • 32:20 lesswrong and something-in-the-waterism
  • 36:49 overtrusting of ingroups
  • 39:35 vulnerability to believing people's BS self-claims
  • 47:35 norms aren't sharp enough
  • 54:41 weird cultlike privacy norms
  • 56:46 realnaming as "doxxing"
  • 58:28 no viable method for calling out rumors/misinformation if realnaming is 'doxxing'
  • 1:00:16 the strangeness and backwardness of LW-sphere privacy norms
  • 1:04:07 EA: disr
... (read more)
1IrenicTruth
I haven't listened to the video yet. (It's very long, so I put it on my watch-later list.) Nor have I finished Eliezer's Sequences (I'm on "A Technical Explanation of Technical Explanation.") However, I looked at the above summaries to decide whether it would be worth listening to the video.

Potential Weaknesses

  • None of the alternative books say anything about statistics. A rough intro to Bayesian statistics is an essential part of the Sequences. Without this, you have not made them superfluous.
    • A rough understanding of Bayesian statistics is a valuable tool.
    • Anecdote: I took courses in informal logic when I was a teenager and was aware of cognitive biases. However, the a-ha moment that took me out of the religion of my childhood was to ask whether a particular theodicy was probable. This opened the way to ask whether some of my other beliefs were probable (not possible, as I'd done before). Within an hour of asking the first question, I was an atheist. (Though it took me another year to "check my work" by meeting with the area pastors and elders.) I thought to ask it because I'd been studying statistics. So, for me, the statistical lens helped where the other lenses failed to reveal my errors. I already knew a horde of problems with the Bible, but the non-probabilistic approaches allowed me to deal with the evidence piece by piece. I could propose a fix for each one. For example, following Origen, I could say that Genesis 1 was an allegory. Then it didn't count against the whole structure.
    • The above anecdote took place several years before I encountered LessWrong. I'm not saying that the Sequences/LessWrong helped me escape religion. I'm saying that Bayesian stats worked where other things failed, so it was useful to me, and you should not consider that you've replaced the Sequences if you leave it out.
  • Handbook of the History of Logic: The Many Valued and Nonmonotonic Turn in Logic is on the reading list. I haven't read it, but t

Reading the comments here, I think I may halve my estimate of self-install time.

I've wanted to install a bidet for 8+ years. However, I've always had higher-priority projects.

Costs that deter me:

  • What for you is a 20-minute project will be 4-8 hours for me because it involves plumbing (and I want it to not leak). The fastest plumbing project I've ever had (cleaning the p-trap beneath the bathroom sink) took 1.5 hours.
  • Hiring a contractor will be $100 because I live in a high-rent area, and they need to cover the expense of coming out. It will take me 1 hour to choose, schedule, and oversee a contractor.
  • I don't know how to choose a b
... (read more)
2Lakin
Huh, you have large estimates anyway. Thanks for the Aella link.

Hint for those who want to read the text at the link: go to the bottom and click "view source" to get something that is not an SVG.

> The best explanation I have found to explain this discrepancy is that ... RLACE ... finds ... a direction where there is a clear separation,

You could test this explanation using a support vector machine, which finds the direction that gives the maximum separation.
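
A minimal sketch of that test (with random stand-ins for the activations and labels; not the paper's code):

```python
import numpy as np
from sklearn.svm import LinearSVC

X = np.random.randn(200, 64)           # stand-in for activation vectors
y = np.random.randint(0, 2, size=200)  # stand-in for concept labels

svm = LinearSVC(C=1.0).fit(X, y)
direction = svm.coef_[0] / np.linalg.norm(svm.coef_[0])  # max-margin normal vector

# Compare how well the classes separate along this direction vs. the RLACE one.
projections = X @ direction
```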

(This is a drive-by comment. I'm trying to reduce my external obligations, so I probably won't be responding.)

1Fabien Roger
The original paper on INLP uses a support vector machine and finds very similar results: there isn't actually a margin; the data are always slightly mixed, but less so when looking in the direction found by the linear classifier. (I implemented INLP with a linear classifier so that it could run on the GPU.) I would be very surprised if it made any difference, given that L2 regularization on INLP doesn't make a difference.

A lot of the steps in your chain are tenuous. For example, if I were making replicators, I'd ensure they were faithful replicators (not that hard from an engineering standpoint). Making faithful replicators negates step 3.
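
A toy sketch of what "faithful" could mean in practice, assuming the replicator carries a checksum of its own build instructions: verify every copy and discard mutants rather than deploying them.

```python
import hashlib

def checksum(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

GENOME = b"...replicator build instructions..."  # placeholder payload
EXPECTED = checksum(GENOME)

def replicate(source: bytes):
    copy = bytes(source)  # the copying step, which could introduce errors
    if checksum(copy) != EXPECTED:
        return None       # reject the mutated copy instead of running it
    return copy
```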

(Note: I won't respond to anything you write here. I have too many things to respond to right now. But I saw the negative vote total and no comments, a situation I'd find frustrating if I were in it, so I wanted to give you some idea of what someone might disagree with/consider sloppy/wish they hadn't spent their time reading.)

2mu_(negative)
"For example, if I were making replicators, I'd ensure they were faithful replicators " Isn't this the whole danger of unaligned AI? It's intelligent, it "replicates" and it doesn't do what you want. Besides physics-breaking 6, I think the only tenuous link in the chain is 5; that AI ("replicators") will want to convert everything to comptronium. But that seems like at least a plausible value function, right? That's basically what we are trying to do. It's either that or paperclips, I'd expect. (Note, applaud your commenting to explain downvote.)
1Alex Beyman
>"A lot of the steps in your chain are tenuous. For example, if I were making replicators, I'd ensure they were faithful replicators (not that hard from an engineering standpoint). Making faithful replicators negates step 3." This assumes three things: First, the continued use of deterministic computing into the indefinite future. Quantum computing, though effectively deterministic, would also increase the opportunity for copying errors because of the added difficulty in extracting the result. Second, you assume that the mechanism which ensures faithful copies could not, itself, be disabled by radiation. Third, that nobody would intentionally create robotic evolvers which not only do not prevent mutations, but intentionally introduce them.  The article also addresses the possibility that strong AI itself, or self replicating robots, are impossible (or not evolvable) when it talks about a future universe saturated instead with space colonies: "if self replicating machines or strong AI are impossible, then instead the matter of the universe is converted into space colonies with biological creatures like us inside, closely networked. "Self replicating intelligent matter" in some form, be it biology, machines or something we haven't seen yet. Many paths, but to the same destination."  >"But I saw the negative vote total and no comments, a situation I'd find frustrating if I were in it," I appreciate the consideration but assure you that I feel no kind of way about it. I expect that response as it's also how I responded when first exposed to ideas along these lines, mistrusting any conclusion so grandiose that I did not put together on my own. LessWrong is a haven for people with that mindset which is why I feel comfortable here and why I am not surprised, disappointed or offended that they would also reject a conclusion like this at first blush, only coming around to it months or years later, upon doing the internal legwork themselves. 

Feature request: some way to keep score. (Maybe a scoring mode that makes the black box an outline on hover, with click right = unscored, left-right = correct, and left-left-right = incorrect; or maybe mouse-out = unscored, left = incorrect, and right = correct.)

I haven't finished reading this; I read the first few paragraphs and scanned the rest of the article to see if it would be worth reading. But I want to point out that starting with Harsanyi's Utilitarianism Theorem (a.k.a. Harsanyi's Impartial Observer Theorem) implies that you assume "independence of irrelevant alternatives," because the theorem assumes that its agents obey [1] the von Neumann–Morgenstern utility theorem. The fourth axiom of this theorem (as listed in Wikipedia) is the "independence of irrelevant alternatives." Since from the previous art... (read more)
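
For reference, the axiom in question, as it is usually stated:

```latex
% vNM independence: a preference between lotteries L and M survives
% mixing both with any third lottery N.
L \succeq M \iff pL + (1-p)N \succeq pM + (1-p)N
\quad \text{for all lotteries } N \text{ and all } p \in (0, 1].
```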

> You're trying to bake your personal values (like happy humans) into the rules.

My point is that this has already happened. The underlying assumptions bake in human values. The discussion so far did not convince me that an alien would share these values. I list instances where a human might object to these values. If a human may object to "a player which contributes absolutely nothing ... gets nothing," an alien may object too; if a human may object to "the only inputs are the set of players and a function from player subsets to utility," an alien may obj... (read more)

5Dweomite
While an alien (or a human) could in principle object to literally any rule (No Universally Compelling Arguments), I think "players who contribute nothing get nothing" is very reasonable on purely pragmatic grounds, because those players have nothing to bargain with. They are effectively non-players.

If you give free resources to "players" who contribute nothing, then what stops me from demanding additional shares for my pet rock, my dead grandparents, and my imaginary friends? The chaa division of resources shouldn't change based on whether I claim to be 1 person or a conglomerate of 37 trillion cells that each want a share of the pie, if the real-world actions being taken are the same under both abstractions.

Also, I think you may be confusing desiderata with assumptions. "Players who contribute nothing get nothing" was taken as a goal that the rules tried to achieve, and so it makes sense (in principle) to argue about whether that's a good goal. Stuff like "players have utility functions" is not a goal; it's more like a description of what problem is being solved. You could argue about how well that abstraction represents various real scenarios, but it's not really a values statement.

Quite a few of the assumptions used to pin down solutions seem to restrict the solution space for bargaining strategies unnecessarily. For example,

  1. "A player which contributes absolutely nothing to the project and just sits around, regardless of circumstances, should get 0 dollars."

    We might want solutions that benefit players who cannot contribute. For example, in an AGI world, a large number of organic humans may not be able to contribute because overhead swamps gains from trade in comparative advantage. We still want to give these people a slice of

... (read more)
3Dweomite
This isn't a philosophical post about how you would reshape the world if you had godlike powers to dictate terms to everyone; it's a mathematical post about how agents with conflicting goals can reach a compromise.

You're trying to bake your personal values (like happy humans) into the rules. If all the players in the game already share your values, you don't need to do that, because it will already be reflected in their utility functions. If all players in the game don't share your values (e.g., aliens), then why would they agree to divide resources according to rules that explicitly favor your values over theirs?

I use the "Bearable" app for very rough time logging. It has a system of toggles for "factors" where you can specify what factor was present in a 6-hour interval of your day. Since I am mainly interested in correlations with other things I measure, a primary purpose of "Bearable," this low resolution is a good compromise. It also makes it easy to log after the fact. "Did I do this activity in this 6-hour period?" is a much easier question than remembering down to an hour or quarter-hour granularity. The downside is I can't tell how much time I've invested ... (read more)

I think learning is likely to be a hard problem in general (for example, the "learning with rounding problem" is the basis of some cryptographic schemes). I am much less sure whether learning the properties of the physical or social worlds is hard, but I think there's a good chance it is. If an individual AI cannot exceed human capabilities by much (e.g., we can get an AGI as brilliant as John von Neumann but not much more intelligent), is it still dangerous?
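
For reference, the learning-with-rounding (LWR) problem mentioned above, roughly as stated in the cryptographic literature (moduli p < q, secret vector s):

```latex
% An LWR sample: a rounded inner product with the secret s \in \mathbb{Z}_q^n.
\left(\mathbf{a},\; \left\lfloor \tfrac{p}{q}\,\langle \mathbf{a}, \mathbf{s} \rangle \right\rceil \bmod p\right),
\qquad \mathbf{a} \xleftarrow{\$} \mathbb{Z}_q^n .
% Decision-LWR: distinguish such samples from pairs (a, u) with u uniform in Z_p.
```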

2jimrandomh
John von Neumann probably isn't the ceiling, but even if there were a near-human ceiling, I don't think it would change the situation as much as you would think. Instead of "an AGI as brilliant as JvN," it would be "an AGI as brilliant as JvN per X FLOPs," for some X. Then you look at the details of how many FLOPs are lying around on the planet, and how hard it is to produce more of them, and depending on X, the JvN-AGIs probably aren't as strong as a full-fledged superintelligence would be, but they do probably manage to take over the world in the end.

You may want to look at what happens with test data never shown to the network or used to make decisions about its training. Pruning often improves generalization when data are abundant compared to the complexity of the problem space because you are reducing the number of parameters in the model.
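
A minimal sketch of that evaluation protocol (stand-in model and random data; real code would load the actual held-out split):

```python
import torch
import torch.nn.utils.prune as prune

model = torch.nn.Linear(100, 10)  # stand-in for the trained network

def test_accuracy(m, xs, ys):
    with torch.no_grad():
        return (m(xs).argmax(dim=1) == ys).float().mean().item()

# A held-out split: never used for training or for training decisions.
xs, ys = torch.randn(500, 100), torch.randint(0, 10, (500,))

before = test_accuracy(model, xs, ys)
prune.l1_unstructured(model, name="weight", amount=0.5)  # zero out 50% of weights
after = test_accuracy(model, xs, ys)
print(before, after)
```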

Going from "Parts" to "Self," you said the Self might be all the Parts processing together. (Capitalized "Self" means the IFS "Core Self.") How likely is the hypothesis that the Self is an artifact of the therapeutic procedure? When someone says they feel angry at a Part and claims that anger does not come from a Part but is their self, the therapist doesn't accept it. The therapist tells them they need to unblend. But when they describe the 8 C's and say that is their self, the therapist does not ask them to unblend, perceiving that as their Self.

2Kaj_Sotala
Oops, I never got around to answering this question. When you ask how likely it is that it's an artifact of the therapeutic procedure, what's the alternative hypothesis you have in mind? What would not being an artifact of the therapeutic procedure mean?

For lefties:

  • We put unaligned AIs in charge of choosing what news people see. Result: polarization that has led to millions of deaths. Let's not make the same mistake again.

For right-wingers:

  • We put unaligned AIs in charge of choosing what news people see. Result: people addicted to their phones, oblivious to their families, their morals, and their eroding freedoms. Let's not make the same mistake again.

YouTubers live in constant fear of the mysterious, capricious Algorithm. There is no mercy or sense, just rituals of appeasement as it maximizes "engagement." Imagine that, but it runs your whole life.

<Optional continuation:> You don't shop at Hot Topic because you hear it can hurt your ranking, which could damage your next hiring opportunity. And you iron your clothes despite the starch making you itch because it should boost your conscientiousness score, giving you an edge in dating apps.

Do we serve The Algorithm, or does it serve us? Choose before The Algorithm chooses for you.

That kid who always found loopholes in whatever his parents asked him? He made an AI that's just like him.

COVID and AI grow exponentially. In December 2019, COVID was a few people at a fish market. In January, it was just one city. In March, it was the world. In 2010, computers could beat humans at Chess. In 2016, at Go. In 2022, at art, writing, and truck driving. Are we ready for 2028?

Someone who likes machines more than people creates a machine to improve the world. Will the "improved" world have more people?

Humanity's incompetence has kept us from destroying ourselves. With AI, we will finally break that shackle.

Orwell's boot stamped on a human face forever. The AI's boot will crush it first try.

Hey Siri, "Is there a God?" "There is now."

 - Adapted from Fredric Brown, "The Answer" - for policymakers.

1Peter Berggren
This seems like it falls into the trap of being "too weird" for policymakers to take seriously. Good concept; maybe work on the execution a bit?
1trevor
Many policymakers might think that an AGI would fly to the center of the galaxy and duke it out with God/Yahweh, mano a mano. I've seen singularitarians who've had that dilemma. Religion doesn't really preclude any kind of intelligent thought or impede anyone from getting into any position, since it starts from birth and rarely insists on any statement (other than powerful stipulations about what a superhuman entity is supposed to look like).

> Rationalism requires stacktraces terminating in irrefutable observation

Like the previous two commenters, I find this statement odd. I don't fully trust my senses. I could be dreaming/hallucinating. I don't fully trust my knowledge of my thoughts. By this definition of a rationalist, I could never be one (and maybe I'm not) because I don't think there is such a thing as an irrefutable observation. I think there was a joke in that statement, but, unobserved by me, it took flight and now soars somewhere else.

Like pjeby, I think you missed his point. He was not arguing from authority, he was presenting himself as evidence that someone tech-savvy could still see it as a trap. His actual reason for believing it is a trap is in his reply to GWS.

> If one must choose between a permanent loss of human life and some temporary discomfort, it doesn't make sense to prefer the permanent loss of life, regardless of the intensity of the discomfort.

This choice doesn't exist; permanent death is inevitable under known physics. All lifespans are finite because the time the universe will support consciousness is most likely finite, whether because of heat death or the big rip. This finiteness makes your "you save one life, and 7 billion humans suffer for 100 billion years" question not at all obvious. Savin... (read more)

I had a similar issue. I could not do the exercise because I could not figure out how to evaluate confidence and competence separately. I always end up on the x=y line. Reading this thread did not help. "Anticipated okayness of failure" doesn't change much with time for the same task, so that is a vertical line. "Confidence" = "Self-related ability to improve" is an interesting interpretation (working on "confidence" would be working on learning skills). Still, intuitively it feels off from what the graphs say (though I haven't been able to put the disconnect into words). Thinking about the improv/parachute graph, maybe "confidence" is "willingness to attempt a task despite being incompetent." I'm giving up for now.

I found a review on Amazon (quoted at the bottom, since I cannot link to it) that says Ecker is injecting significant personal opinion and slanting his report of the science. I don't know if this is true, but the gushing praise from readers and psychology's history of jumping on things rather than evaluating evidence make it seem more likely than not. For me, this means that reading this book will involve getting familiar with the associated papers.

The Review

by "scholar"

Previously I posted a very positive review of this book. On further reflection and st

... (read more)
2CraigMichael
Did you finish reading it?

> If recoupments occur sparingly, as I'd expect, where should the remaining funds go?

Keep them for "times of national emergency" etc. to hedge against correlated risk.

> How big is the risk that the fund will be used in illicit ways, such as tax evasion, despite the fact that donors cannot claim more than they spent?

Modern society strongly incentivizes misusing anything that touches money, so without further evidence, I'd say that the risk is very high (near certainty). If we haven't found a way to misuse it, it is more likely that we are not clever enou... (read more)

1bice
Unless they are in a legitimate emergency situation, they are defrauding charity. Unfortunately, this doesn't stop everyone, but if they are caught, they would lose all of the money they donated. If I were the one committing fraud, this would seem very risky to me. Fraudulent claims usually have different characteristics than non-fraudulent claims. If someone claims the full amount they're entitled to on a large sum of money, the fund should investigate that claim before giving the money.

What experimental tests has clash theory survived?

2Chris Land
All of them, but also none of them. It has successfully explained (to my own satisfaction only) every humor example I've ever encountered, including extreme outliers. It's a reasonably comprehensive examination of all causes of humor response variability (but maybe there are some I missed). Clash Theory explains, predicts response, and assists construction, both in editing and in generating.

However, independent experimental testing of Clash Theory has never been done. Not yet. I would like it to be, but I've found my wishes are seldom granted immediately. I've met people who run humor experiments, and I find their work extremely interesting. I'm not set up to run any experiments (I'm a theoretician), but in any case it's a task better done by people who are not me.

I'm sure I've made errors or missed nuances or expressed ideas in ways that could be improved. Why Funny Is Funny mentions many specific technical areas for further research. Quite probably some or much of this has already been done and I haven't encountered it yet.

> Take all the metaphysical models of the universe that any human ever considers.

This N is huge. Approximate it with the number of strings generatable in a certain formal language over the lifetime of the human race. We're probably talking about billions even if the human race ceases to exist tomorrow. (Imagine that 1/7 of the people have had a novel metaphysical idea, and you get 1B with just the people currently on earth today. If you think that's a high estimate, remember that people get into weird states of consciousness (through fever, drugs, exertion, ... (read more)

2alexgieg
Those aren't metaphysical. Metaphysics is a well-defined philosophical research field.

If you can model everything as tasks, FogBugz has a feature I used to help myself complete grad school: https://fogbugz.com/evidence-based-scheduling/, which gives you a probability distribution over finishing times. It was incredibly useful! You might want to start the free trial to see if they still have the "if you have too few users, you can use it for free until you get big enough" deal they used to have.
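
Roughly the idea, as I understand it (a Monte Carlo sketch with made-up history): resample your own past actual-to-estimate ratios to turn point estimates into a distribution over finish times.

```python
import random

history = [1.0, 1.3, 0.8, 2.5, 1.1, 1.7]  # past actual/estimate ratios (made up)
estimates = [4, 8, 2, 16]                  # hours estimated for remaining tasks

# Simulate many possible futures by scaling each task by a sampled ratio.
totals = sorted(
    sum(est * random.choice(history) for est in estimates)
    for _ in range(10_000)
)
print(f"50% chance done within {totals[5_000]:.0f} hours")
print(f"90% chance done within {totals[9_000]:.0f} hours")
```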

As of (X years ago) it was missing appointment scheduling.

My most recent solution for individual scheduling is Skedpal. It does not have the overshoo... (read more)

2Bart Bussmann
Your Skedpal link leads to a sketchy site. I believe you meant Skedpal.  

The earliest citation in Wikipedia is from 1883, and it is a question and answer: "If a tree were to fall on an island where there were no human beings would there be any sound?" [The asker] then went on to answer the query with, "No. Sound is the sensation excited in the ear when the air or other medium is set in motion."

So, if this is truly the origin, they knew the nature of sound when the question was first asked.

Re: dominant assurance contracts/crowdfunding

The article makes the bad assumption that F, the distribution of individual values of the public good, is common knowledge. A good entrepreneur will do market research to try to determine F. But better approximations cost more. Entrepreneurs will also be biased to think their idea is good. So, it is likely that many entrepreneurs will have bad models. Most individuals will also not know F. So, there is another mode to profit for the small fraction of individuals who have decent approximat... (read more)
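
A toy simulation of that failure mode (all numbers and distributions made up): the entrepreneur prices the contract off an estimated F, but pledges arrive from the true one, and the failure payoff is owed exactly when the threshold is missed.

```python
import random

N = 1000          # potential contributors
THRESHOLD = 400   # pledges needed for the project to go forward
PRICE = 10.0      # contribution per pledger
BONUS = 2.0       # failure payoff owed to each pledger

def profit(true_mean_value):
    values = [random.gauss(true_mean_value, 5.0) for _ in range(N)]
    pledgers = sum(v >= PRICE for v in values)
    if pledgers >= THRESHOLD:
        return (pledgers - THRESHOLD) * PRICE  # revenue beyond project cost
    return -pledgers * BONUS                   # contract failed: pay the bonuses

# Suppose the entrepreneur modeled the mean value as 11, but the truth is 9.
runs = [profit(true_mean_value=9.0) for _ in range(1000)]
print("mean profit under the wrong model:", sum(runs) / len(runs))
```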

> knowing about the MCU, no matter how cool, doesn't pay rent

Is not "enables socialization" a form of rent?

In favor of this particular point, I know about the MCU despite disliking superhero movies and comics (except Watchmen) precisely because it is helpful in my social circles.

Regarding @jaspax's main point, it is not obvious that formal education is necessary to generate a shared mythopoetic structure. OTOH I can't think of an example of a long-lasting one that does not have a group actively involved in educating people about it. So, it is not obvious that it is a poor candidate for formal education either.
