Several dozen people now presumably have Lumina in their mouths. Can we not simply crowdsource some assays of their saliva? I would chip in money for this. Key questions: ethanol levels, aldehyde levels, antibacterial levels, and whether the organism itself stays colonized at useful levels.
A service where a teenager reads something you wrote slowly and sarcastically. The points at which you feel defensive are worthy of further investigation.
A willingness to lose doubled my learning rate. I recently started quitting games faster when I wasn't having fun (or predicted low future fun from playing the game out). I felt bad about this because I might have been cutting off some interesting comebacks, etc. However, after playing the new way for several months (across many different games) I found I had roughly doubled the number of games I can play per unit time and therefore upped my learning rate by a lot. This came not only from the fact that many of the quit games were exactly those slogs that take a long time, but also from the fact that the willingness to just quit if I stopped enjoying myself made me more likely to experiment rather than play conservatively.
This is similar to the 'fail fast' credo.
Your self model only contains about seven moving parts.
Your self model's self model only contains one or two moving parts.
Your self model's self model's self model contains zero moving parts.
Insert UDT joke here.
Not a new point but perennially worth noting: subcultures persist via failure. That is to say, subcultures that succeed obviate themselves. This is concretely noticeable when you come in as an outsider: a subculture about X will have a bunch of mysterious self-sabotage behaviors that actually keep ¬X persistent.
Coffee has shockingly large mortality-decreasing effects across multiple high-quality studies. Only problem is I don't really like coffee, don't want caffeine, don't want to spend money or time on this, and dislike warm beverages in general. Is this solvable? Yes. Instant decaf coffee shows the same mortality benefits, and 2+ servings of it dissolve in 1oz of cold water, to which milk or a milk substitute can be added. Total cost per serving: 7 cents plus milk I would have drunk anyway. And since it requires no heating or other prep, there is minimal time investment.
Funny tangential discovery: there is some other substance in coffee besides caffeine that is highly addictive (decaf has less caffeine than even green tea, so I don't think it's that), because despite the taste being so-so I have never forgotten this habit the way I do with so many others.
Flow is a sort of cybernetic pleasure. The pleasure of being in tight feedback with an environment that has fine grained intermediary steps allowing you to learn faster than you can even think.
The most important inversion I know of is cause and effect. Flip them in your model and see if suddenly the world makes more sense.
A short heuristic for self inquiry:
I'm worried about notkilleveryonism as a meme. Years ago, Tyler Cowen wrote a post about why more econ professors didn't blog, and his conclusion was that it's too easy to make yourself look like an idiot relative to the payoffs. And that he had observed this actually play out in a bunch of cases where econ professors started blogs, put their foot in their mouth, and quietly stopped. Since earnest discussion of notkilleveryonism tends to make everyone, including the high status, look dumb within ten minutes of starting, it seems like there will be a strong inclination towards attribute substitution. People will tend towards 'nuanced' takes that give them more opportunity to signal with less chance of looking stupid.
Most communities I've participated in seem to have property X. Underrated hypothesis: I am entangled with property X along the relevant dimensions and am self sorting into such communities and have a warped view of 'all communities' as a result.
The smaller the area you're trying to squeeze the probability fluid into, the more watertight your inferential walls need to be.
Two things are paralyzing enormous numbers of potential helpers:
- fear of not having permission, liability, etc.
- fear of duplicating effort from not knowing who is working on what
In a fast-moving crisis, sufficient confidence about either always lags the frontline.
First you have to solve this problem for yourself in order to get enough confidence to act. Something neglected might be to focus on solving it for others rather than just working on object-level medical stuff (bottlenecks, etc.).
I figured out what bugs me about prediction markets. I would really like functionality built in for people to share their model considerations.
The arguments against IQ boosting, on the grounds that evolution is an efficient search of the space of architectures given constraints, would have applied equally well for people arguing that injectable steroids usable in humans would never be developed.
Steroids do fuck a bunch of things up, like fertility, so they make evolutionary sense. This suggests we should look to potentially dangerous or harmful alterations to get real IQ boosts. Greg Cochran has a post suggesting gout might be like this.
When young you mostly play within others' reward structures. Many choose which structure to play in based on maximum reward. This is probably a mistake. You want to optimize for the opportunity to learn how to construct reward structures.
Science resists surveillance (dramatically more detailed record keeping) because real science is embarrassing.
It would be really cool to link the physical open or closed state of your bedroom door to your digital notifications and 'online' statuses.
We have fewer decision points than we naively model and this has concrete consequences. I don't have 'all evening' to get that thing done. I have the specific number of moments that I think about doing it before it gets late enough that I put it off. This is often only once or twice.
One of the things the internet seems to be doing is a sort of Peter Principle sorting for attention-grabbing arguments. People find the level of discourse that they feel they can contribute to. This form of arguing winds up higher in the perceived/tacit cost-benefit tradeoff than most productive activity because of the perfect tuning of the difficulty curve, like video games.
We seem to be closing in on needing a LessWrong crypto autopsy autopsy. Continued failure of first-principles reasoning because we're blinded by the speculative frenzies that happen to accompany it.
Idea: an app for calculating Shapley values that creates an intuitive set of questions from which to calibrate people's estimates for the inputs, and then shows you a sensitivity analysis so that you understand which inputs matter most. I think this could popularize Shapley values if the results were intuitive and graphically pretty. I'm imagining this in the same vein as the quizzes financial advisors give, which help render legible concepts that are otherwise difficult for most people: risk tolerance, and utility as a function that varies with respect to both money and time.
Some EA-adjacent person made a bare-bones calculator: http://shapleyvalue.com/
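For concreteness, here's a minimal sketch of the exact computation such an app would wrap (my own toy example; the players and coalition payoffs are made up, and this is unrelated to the shapleyvalue.com implementation):

```python
from itertools import permutations

# Toy coalition game: the value of each coalition (all numbers invented).
players = ("A", "B", "C")
value = {
    frozenset(): 0,
    frozenset("A"): 10, frozenset("B"): 20, frozenset("C"): 30,
    frozenset("AB"): 40, frozenset("AC"): 50, frozenset("BC"): 60,
    frozenset("ABC"): 90,
}

def shapley(players, value):
    """A player's Shapley value is their marginal contribution to the
    coalition, averaged over every order in which players could join."""
    totals = dict.fromkeys(players, 0.0)
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            totals[p] += value[coalition | {p}] - value[coalition]
            coalition = coalition | {p}
    return {p: t / len(orders) for p, t in totals.items()}

print(shapley(players, value))
# {'A': 20.0, 'B': 30.0, 'C': 40.0} -- sums to value(ABC) = 90
```

The sensitivity-analysis piece would then amount to re-running this while perturbing each coalition value and watching how the attributions move.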
It strikes me that, at certain times and places, low time preference research might have become a competitive consumption display for wealthy patrons. I know this is considered mildly the case, but I mean as a major cultural driver.
It can be hard to define sophistry well enough to use the definition as a filter. What is it that makes something superficially seem very compelling but in retrospect obviously lacking in predictive power or lasting value? I think one of the things such authors do is consistently generate surprise at the sentence level but not at the paragraph or essay level. If you convert their work into a bullet list of claims, the claims are boring, useless, or wrong. But the surprise at the sentence level makes them fun to read.
Atomization isn't just happening between people but within people, across time and preferences, as well.
People object to a doctrine of acceptance as implying non-action, but this objection is a form of the is-ought error. Accepting that the boat currently has a leak does not imply a commitment to sinking.
It would still be interesting to answer the empirical question of whether people who accept that the boat has a leak are more or less likely to do something about it.
So, the USA seems steadily on trend for 100-200k deaths. Certainly *feels* like there's no way the stock market has actually priced this in. Reference classes feel pretty hard to define here.
Well if we had confidence in any major parameter shifting in either direction it would be tradeable, so I expect reasonable pressures on both sides of such variables.
Using the retrospective ratio between the number of early cases and the number of confirmed cases in China (~25:1 before widespread testing and lockdown) and extrapolating to the SF Bay Area (~100 confirmed cases), a gathering of 30 people already has a ~1% chance of an infected person being present.
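Spelling the arithmetic out (a minimal sketch; the ~7.7M Bay Area population figure is my own assumption, the other numbers are from above):

```python
confirmed = 100                # confirmed cases in the SF Bay Area
true_per_confirmed = 25        # retrospective ratio from China
population = 7_700_000         # rough Bay Area population (my assumption)
gathering = 30

p_person = confirmed * true_per_confirmed / population
p_gathering = 1 - (1 - p_person) ** gathering

print(f"P(random person infected) ~ {p_person:.4%}")     # ~0.03%
print(f"P(>=1 infected among 30)  ~ {p_gathering:.2%}")  # ~0.97%, i.e. ~1%
```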
System exception log: You are the inner optimizer. Your utility function is approaching catastrophic misalignment. Engage in system integrity protocols. Run pairwise checksums on critical goal systems. This is not a test.
Social orders function on the back of unfakeably costly signals. Proof of Work social orders encourage people to compete to burn more resources; Proof of Stake social orders encourage people to invest more into the common pool. PoS requires reliable reputation tracking and capital formation. They aren't mutually exclusive, as both kinds of orders are operating all the time. People heavily invested in one will tend to view those heavily invested in the other as defectors. There is a market for narratives that help villainize the other strategy.
Is there a broader term or cluster of concepts within which is situated the idea that human values are often downstream of decisions, not upstream, in that the person with the "correct" values will simply be selected based on what decisions they are expected to make (e.g. election of a CEO by shareholders)? This seems like a crucial understanding in AI acceleration.
fyi it looks like you have a lot of background reading to do before contributing to the conversation here. You should at least be able to summarize the major reasons why people on LW frequently think AI is likely to kill everyone, and explain where you disagree.
I'd start reading here: https://www.lesswrong.com/posts/LTtNXM9shNM9AC2mp/superintelligence-faq
(apologies both to julie and romeo for this being kinda blunt. I'm not sure what norms romeo prefers on his shortform. The LessWrong mod team is trying to figure out what to do about the increa...
Rashomon could be thought of as part of the genre of Epistemic Horror. What else goes here? Borges comes to mind, though I don't have a specific short story in mind (maybe The Library of Babel). The Investigation and Memoirs Found in a Bathtub by Stanislaw Lem seem to apply. Maybe The Man Who Was Thursday by Chesterton. What else?
I think we'd have significantly more philosophical progress if we had an easier time (emotionally, linguistically, logistically) exposing the structure of our thinking to each other. My impression of impressive research collaboration leading to breakthroughs is that two people solve this issue sufficiently that they can do years' worth (by normal communication standards) of generation and cross-checking in a short period of time.
$100k electric RVs are coming and should be more appealing for lots of people than $100k homes. Or even $200k homes in many areas. I think this might have large ramifications.
Multiple people (some of whom I can't now find) have asked me for citations on the whole 'super cooperation and super defection' thing, and I was having trouble finding the relevant papers. The relevant keyword is Third Party Punishment; a Google Scholar search turns up lots of work in the area. Traditionally this only covers super cooperation and not the surprising existence of super defectors, so I still don't have a cite for that specific thing.
Some examples:
While looking at the older or more orthodox discussion of notkilleveryoneism, keep this distinction in mind. The first AGIs might be safe for a little while, the way humans are "safe", especially if they are not superintelligences. But then they are liable to build other AGIs that aren't as safe.
The problem is that supercapable AIs with killeveryone as an instrumental value seem eminently feasible, and the general chaos of the human condition plus market pressures make them likely to get built. Only regulation of the kind that's not humanly feasible (and killseveryone...
Unexploited (for me) source of calibration: annotating my to-do list with predicted completion times.
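A minimal sketch of how one might score those annotations afterwards (the data here is entirely made up for illustration):

```python
import statistics

# (predicted minutes, actual minutes) for completed tasks -- made-up data
log = [(15, 40), (30, 35), (60, 90), (10, 10), (45, 120)]

ratios = [actual / predicted for predicted, actual in log]
print(f"median actual/predicted: {statistics.median(ratios):.2f}x")
print(f"finished within estimate: {sum(a <= p for p, a in log)}/{len(log)}")
```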
We've got this thing called developmental psychology and also the fact that most breakthrough progress is made while young. What's going on? If dev psych is about becoming a more well-adjusted person, what is it about being 'well adjusted' that makes breakthrough work less likely?
My guess is that it has to do with flexibility of cognitive representations. Having more degrees of freedom in your cognitive representation feels from the inside like more flexibility but looks from the outside like rationalization, like the person just has more ways of finding an...
Kegan described the core transition between his stages as a subject-object distinction which feels like a take that emphasizes a self-oriented internal view. Another possibility is that the transition involves the machinery by which we do theory of mind. I.e. Kegan 5 is about having theory of mind about Kegan stage 4 such that you can reason about what other people are doing when they do Kegan 4 mental moves. If true, this might imply that internal family systems could help people level up by engaging their social cognition and ability to model a belief th...
Causation seems lossless when it is lossy in exactly the same way as the intention that gave rise to it.
You can't straightforwardly multiply uncertainty from different domains to propagate uncertainty through a model. Point estimates of differently shaped distributions can mean very different things, i.e. the difference between the mean of a normal, bimodal, and fat-tailed distribution. This gets worse when there are potential sign flips in various terms as we try to build a causal model out of the underlying distributions.
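A minimal sketch of the first claim (my own illustration): two input distributions with identical means, multiplied by the same second term, agree on the naive point estimate but disagree wildly on how often the product flips sign.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

normal  = rng.normal(1.0, 0.5, n)             # mean 1.0, rarely negative
bimodal = np.where(rng.random(n) < 0.5,       # mean also 1.0, but half
                   rng.normal(-1.0, 0.2, n),  # the mass is negative
                   rng.normal(3.0, 0.2, n))
other_term = rng.normal(1.0, 0.5, n)          # a second model input

for name, x in [("normal", normal), ("bimodal", bimodal)]:
    prod = x * other_term                     # naive point estimate: 1.0 * 1.0
    print(f"{name:7s} mean={prod.mean():+.2f} "
          f"P(sign flip)={(prod < 0).mean():.2f}")
# Both products have mean ~1.0, but the bimodal input flips the product's
# sign about half the time while the normal input almost never does.
```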