All of badger's Comments + Replies

If there is a net positive externality, then even large private benefits aren't enough. That's the whole point of the externality concept.

If a job requires in-person customer/client contact or has a conservative dress code, long hair is a negative for men. I can't think of a job where long hair might be a plus aside from music, arts, or modeling. It's probably neutral for Bay area programmers assuming it's well maintained. If you're inclined towards long hair since it seems low effort, it's easy to buy clippers and keep it cut to a uniform short length yourself.

Beards are mostly neutral--even where long hair would be negative--again assuming they are well maintained. At a minimum, trim it every few weeks and shave your neck regularly.

From the Even Odds thread:

Assume there are n people. Let S_i be person i's score for the event that occurs, according to your favorite proper scoring rule. Then let the total payment to person i be

T_i = S_i - (1/(n-1)) * Σ_{j≠i} S_j

(i.e. the person's score minus the average score of everyone else). If there are two people, this is just the difference in scores. The person makes a profit if T_i is positive and a payment if T_i is negative.

This scheme is always strategyproof and budget-balanced. If the Bregman divergence associated with the scoring rule is symmetric (like it is with the quad... (read more)
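
To make the scheme concrete, here is a toy sketch using a quadratic (Brier-style) proper scoring rule; the function names and the example numbers are mine, not from the original thread.

```python
# Toy sketch of the payment scheme above with a quadratic proper scoring rule.
# All names and numbers are illustrative.

def quadratic_score(prob_assigned_to_outcome):
    # Proper score for a binary event: higher is better, maximum 1.
    return 1 - (1 - prob_assigned_to_outcome) ** 2

def payments(probs, event_occurred):
    """probs: each person's reported probability that the event occurs.
    Returns T_i for each person: own score minus the average score of the others."""
    scores = [quadratic_score(p if event_occurred else 1 - p) for p in probs]
    n = len(scores)
    return [s - (sum(scores) - s) / (n - 1) for s in scores]

t = payments([0.9, 0.6, 0.2], event_occurred=True)
print(t)        # roughly [0.39, 0.165, -0.555]: best forecaster profits, worst pays
print(sum(t))   # ~0 up to floating point: the payments are budget-balanced
```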

Not aware of any tourneys with this tweak, but I use a similar example when I teach.

If the payoff from exiting is zero and the mutual defection payoff is negative, then the game doesn't change much. Exit on the first round becomes the unique subgame-perfect equilibrium of any finite repetition, and with a random end date, trigger strategies to support cooperation work similarly to the original game.

Life is more interesting if the mutual defection payoff is sufficiently better than exit. Cooperation can happen in equilibrium even when the end date is known (except on the last round) since exit is a viable threat to punish defection.

From an economics perspective, the stapler dissertation is real. The majority of the time, the three papers haven't been published.

It's also possible to publish empirical work produced in a few months. The issue is where that article is likely to be published. There's a clear hierarchy of journals, and a low ranked publication could hurt more than it helps. Dissertation committees have very different standards depending on the student's ambition to go into academia. If the committee has to write letters of rec to other professors, it takes a lot more work... (read more)

Results like the Second Welfare Theorem (every efficient allocation can be implemented via competitive equilibrium after some lump-sum transfers) suggest it must be equivalent in theory.

Eric Budish has done some interesting work changing the course allocation system at Wharton to use general equilibrium theory behind the scenes. In the previous system, courses were allocated via a fake-money auction where students had to actually make bids. In the new system, students submit preferences and the allocation is computed as the equilibrium starting from... (read more)

My intuition is that every good allocation system will use prices somewhere, whether the users see them or not. The main perk of the story's economy is getting things you need without having to explicitly decide to buy them (i.e. the down-on-his-luck guy unexpectedly being gifted his favorite coffee), and that could be implemented through individual AI agents rather than a central AI.

Fleshing out how this might play out, if I'm feeling sick, my AI agent notices and broadcasts a bid for hot soup. The agents of people nearby respond with offers. The lowest offer might c... (read more)

1Toggle
This looks very useful. Thanks! Another one of those interesting questions is whether the pricing system must be equivalent to currency exchange. To what extent are the traditional modes of transaction a legacy of the limitations behind physical coinage, and what degrees of freedom are offered by ubiquitous computation and connectivity? Etc. (I have a lot of questions.)

I'm on board with "absurdly powerful". It underlies the bulk of mechanism design, to the point my advisor complains we've confused it with the entirety of mechanism design.

The principle characterizes the entire set of outcomes implementable under some solution concept, like dominant-strategy equilibrium or Bayes-Nash equilibrium. It works for any search over the set of outcomes, whether that leads to an impossibility result or a constructive result like identifying the revenue-optimal auction.

Given an arbitrary mechanism, it's easy (in principle) to find the ... (read more)

The paper cited is handwavy and conversational because it isn't making original claims. It's providing a survey for non-specialists. The table I mentioned is a summary of six other papers.

Some of the studies assume workers from poorer countries are permanently 1/3rd or 1/5th as productive as native workers, so the estimate is more like: a person transferred from a $5,000 GDP/capita economy to a $50,000 GDP/capita economy produces an additional $10-15K in value.

0skeptical_lurker
Are they advocating for abolition of the minimum wage? Can one survive on 1/5th the average salary? Will the combination of inequality and race cause civil unrest?
3Lumifer
It looks to me like it's providing evidence for a particular point of view it wishes to promote. I am not sure of its... evenhandedness. I think that social and economic effects of immigration are a complex subject and going on about trillions lying on the sidewalk isn't particularly helpful.

For context on the size of the potential benefit, an additional 1% migration rate would increase world GDP by about 1% (i.e. about one trillion dollars). The main question is the rate of migration if barriers are partially lowered, with estimates varying between 1% and 30%. Completely open migration could double world output. These figures are based on Table 2 of Clemens (2011).

1skeptical_lurker
Do you happen to know of a scatter graph for immigration rate vs GDP? It might shed a little light on the matter, though fertility would be a confounder.

an additional 1% migration rate would increase world GDP by about 1% (i.e. about one trillion dollars)

I am having strong doubts about this number. The paper cited is long on handwaving and seems to be entirely too fond of expressions like "should make economists’ jaws hit their desks" and "there appear to be trillion-dollar bills on the sidewalk". In particular, there is the pervasive assumption that people are fungible so transferring a person from a $5,000 GDP/capita economy to a $50,000 GDP/capita economy immediately nets you $45,000 in additional GDP. I don't think this is true.

The issue is when we should tilt outcomes in favor of higher credence theories. Starting from a credence-weighted mixture, I agree theories should have equal bargaining power. Starting from a more neutral disagreement point, like the status quo actions of a typical person, higher credence should entail more power / votes / delegates.

On a quick example, equal bargaining from a credence-weighted mixture tends to favor the lower credence theory compared to weighted bargaining from an equal status quo. If the total feasible set of utilities is {(x,y) | x^2 + ... (read more)

For the NBS with more than two agents, you just maximize the product of everyone's gain in utility over the disagreement point. For Kalai-Smorodinsky, you continue to equate the ratios of gains, i.e. pick the point on the Pareto frontier that lies on the line between the disagreement point and the vector of ideal utilities.

Agents could be given more bargaining power by giving them different exponents in the Nash product.
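
To spell that out, a sketch of the asymmetric Nash product, where d is the disagreement point, F the feasible set of utilities, and the exponents w_i (e.g. proportional to credences) encode bargaining power; the notation is mine:

$$\max_{u \in F} \; \prod_{i=1}^{n} (u_i - d_i)^{w_i}, \qquad w_i \ge 0, \quad \sum_{i} w_i = 1.$$

With all w_i equal this reduces to the standard NBS; tilting the exponents toward higher-credence theories gives them more pull on the outcome.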

2owencb
Giving them different exponents in the Nash product has some appeal, except that it does seem like NBS without modification is correct in the two-delegate case (where the weight assigned to the different theories is captured properly by the fact that the defection point is more closely aligned with the view of the theory with more weight). If we don't think that's right in the two-delegate case we should have some account of why not.

Alright, a credence-weighted randomization between ideals and then bargaining on equal footing from there makes sense. I was imagining the parliament starting from scratch.

Another alternative would be to use a hypothetical disagreement point corresponding to the worst utility for each theory and giving higher credence theories more bargaining power. Or more bargaining power from a typical person's life (the outcome can't be worse for any theory than a policy of being kind to your family, giving to socially-motivated causes, cheating on your taxes a little, telling white lies, and not murdering).

4owencb
In the set-up we're given the description of what happens without any trade -- I don't quite see how we can justify using anything else as a defection point.

I agree that some cardinal information needs to enter in the model to generate compromise. The question is whether we can map all theories onto the same utility scale or whether each agent gets their own scale. If we put everything on the same scale, it looks like we're doing meta-utilitarianism. If each agent gets their own scale, compromise still makes sense without meta-value judgments.

Two outcomes is too degenerate if agents get their own scales, so suppose A, B, and C were options, theory 1 has ordinal preferences B > C > A, and theory 2 has pr... (read more)

My reading of the problem is that a satisfactory Parliamentary Model should:

  • Represent moral theories as delegates with preferences over adopted policies.
  • Allow delegates to stand up for their theories and bargain over the final outcome, extracting concessions on vital points while letting other policies slide.
  • Restrict delegates' use of dirty tricks or deceit.

Since bargaining in good faith appears to be the core feature, my mind immediately goes to models of bargaining under complete information rather than voting. What are the pros and cons of start... (read more)

8owencb
I think there's a fairly natural disagreement point here: the outcome with no trade, which is just a randomisation of the top options of the different theories, with probability according to the credence in that theory. One possibility to progress is to analyse what happens here in the two-theory case, perhaps starting with some worked examples.
1owencb
I think the Nash bargaining solution should be pretty good if there are only two members of the parliament, but it's not clear how to scale up to a larger parliament.

It turns out the only Pareto efficient, individually rational (i.e. no one ever gets something worse than their initial job), and strategyproof mechanism is Top Trading Cycles. In order to make Cato better off, we'd have to violate one of those properties in some way.
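
For reference, a minimal sketch of Top Trading Cycles for this kind of job-reassignment setting; the function and variable names are mine, and preferences are assumed to be strict.

```python
# Top Trading Cycles (TTC) sketch: agents start with the job they currently hold,
# point to the holder of their favorite remaining job, and whole cycles trade.
# Names are illustrative; prefs[a] lists jobs from most to least preferred.

def top_trading_cycles(owners, prefs):
    """owners: dict agent -> job they currently hold.
    prefs: dict agent -> list of jobs, most preferred first.
    Returns dict agent -> assigned job."""
    assignment = {}
    remaining_agents = set(owners)
    remaining_jobs = set(owners.values())

    while remaining_agents:
        # Each remaining agent points to the current holder of their favorite remaining job.
        job_holder = {owners[a]: a for a in remaining_agents}
        points_to = {}
        for a in remaining_agents:
            favorite = next(j for j in prefs[a] if j in remaining_jobs)
            points_to[a] = job_holder[favorite]

        # Follow the pointers until a cycle appears (one always exists).
        path, a = [], next(iter(remaining_agents))
        while a not in path:
            path.append(a)
            a = points_to[a]
        cycle = path[path.index(a):]

        # Everyone in the cycle receives their favorite remaining job and leaves.
        for a in cycle:
            assignment[a] = next(j for j in prefs[a] if j in remaining_jobs)
        for a in cycle:
            remaining_jobs.discard(owners[a])
            remaining_agents.discard(a)

    return assignment
```

Each pass removes at least one cycle, so the procedure terminates after at most n rounds.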

Metafilter has a classic thread on "What book is the best introduction to your field?". There are multiple recommendations there for both law and biology.

Since Arrow and GS are equivalent, it's not surprising to see intermediate versions. Thanks for pointing that one out. I still stand by the statement for the common formulation of the theorem. We're hitting the fuzzy lines between what counts as an alternate formulation of the same theorem, a corollary, or a distinct theorem.

1Closed Limelike Curves
Every social ranking function corresponds to a social choice function, and vice-versa, which is why they're equivalent. The Ranking→Choice direction is trivial. The opposite direction starts by identifying the social choice for a given ranking. Then, you delete the winner and run the same algorithm again, which gives you a runner-up (who is ranked 2nd); and so on. Social ranking is often cleaner than working with an election algorithm because those have the annoying edge-case of tied votes, so your output is technically a set of candidates (who may be tied).
6Vaniver
Um, not quite: Very Henry Ford-ish.

Arrow's theorem doesn't apply to rating systems like approval or range voting. However, Gibbard-Satterthwaite still holds; if anything, it holds more strongly, since agents have more ways to lie. Now you have to worry about someone saying their favorite is ten times better than their second favorite rather than just three times better, in addition to lying about the order.

See pp. 391-392 of "The Role of Deliberate Practice in the Acquisition of Expert Performance", the paper that kicked off the field. A better summary is that 2-4 hours is the maximum sustainable amount of deliberate practice in a day.

5Lumifer
Ah, so that's where you are coming from. Well, first of all "deliberate practice" is different from "learning". The paper is concerned with ability to perform which is the goal of the deliberate practice, not with understanding which is the goal of learning. Second, the paper is unwilling to commit to this number saying (emphasis mine) "...raising the possibility of a more general limit on the maximal amount of deliberate practice that can be sustained over extended time without exhaustion." I certainly accept the idea that resources such as concentration, attention, etc. are limited (though they recover over time) and you can't just be at your best all your waking time. But there doesn't seem to be enough evidence to fix hard numbers (like 2-4 hours) for that. And, of course, I expect there to be fair amount of individual variation, as well as some dependency on what exactly is it that you're learning or practicing.

I'm a PhD student working in this field and have TA'd multiple years for a graduate course covering this material.

1cursed
I'm convinced! Checked out your first post, good stuff so far.

Typo fixed now. Jill's payment should be p_Jill = 300 - p_Jack.

The second-best direct mechanisms do bite the bullet and assume agents would optimally manipulate themselves if the mechanism didn't do it for them. The "bid and split excess" mechanism I mention at the very end could be better if people are occasionally honest.

I'm now curious what's possible if agents have some known probability of ignoring incentives and being unconditionally helpful. It'd be fairly easy to calculate the potential welfare gain by adding a flag to the agent's type ... (read more)

I added some explanation right after the diagram to clarify. The idea is that if I can design a game where players have dominant strategies, then I can also design a game where they have a dominant strategy to honestly reveal their types to me and proceed on that basis.
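
A minimal sketch of that construction, assuming we already know each type's dominant strategy in the original (indirect) game; all names are illustrative.

```python
# Revelation-principle construction: wrap an indirect mechanism so that agents
# simply report types and the mechanism plays their dominant strategies for them.
# Argument names are illustrative, not from the post.

def make_direct_mechanism(indirect_mechanism, dominant_strategy):
    """indirect_mechanism: maps a profile of actions to an outcome.
    dominant_strategy: maps (agent index, type) to that type's dominant action.
    Returns a direct mechanism mapping a profile of reported types to an outcome."""
    def direct_mechanism(reported_types):
        actions = [dominant_strategy(i, t) for i, t in enumerate(reported_types)]
        return indirect_mechanism(actions)
    return direct_mechanism
```

If misreporting your type here were profitable, then deviating to that type's dominant strategy in the original game would also have been profitable, contradicting dominance, so truthful reporting is dominant in the wrapped mechanism.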

4Cyan
It's funny -- the parent doesn't state anything that you didn't already put in the OP, and yet I think I understand the point a little better. Thanks!

That's an indexed Cartesian product, analogous to sigma notation for indexed summation, so the indexed product of the agents' type sets is the set of all vectors of agent types.
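
In symbols, writing Θ_i for agent i's type set (my notation; the post may use different letters):

$$\prod_{i=1}^{n} \Theta_i = \Theta_1 \times \Theta_2 \times \cdots \times \Theta_n, \qquad \theta = (\theta_1, \ldots, \theta_n) \in \prod_{i=1}^{n} \Theta_i.$$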

4Vulture
Oh, okay. Hah, here I was trying to fight my instinct to automatically interpret capital-pi as a product. Thanks!

Thanks for catching that!

I did introduce a lot here. Now that I've thrown all the pieces of the model out on the table, I'll include refreshers as I go along so it can actually sink in.

0Vulture
Oh, and now that I'm going over it more carefully, another nitpick: You don't seem to actually define the notation Π_i before using it in the definition of a social choice function, and it isn't clear (to me) from context what it's supposed to mean.

Aside from academic economists and computer scientists? :D Auction design has been a big success story, enough so that microeconomic theorists like Hal Varian and Preston McAfee now work at Google full time. Microsoft and other tech companies also have research staff working specifically on mechanism design.

As far as people that should have some awareness (whether they do or not): anyone implementing an online reputation system, anyone allocating resources (like a university allocating courses to students or the US Army allocating ROTC graduates to its branches), or anyone designing government regulation.

Some exposure to game theory. Otherwise, tolerance of formulas and a little bit of calculus for optimization.

At least, I hope that's the case. I've been teaching this to economics grad students for the past few years, so I know common points of misunderstanding, but can easily take some jargon for granted. Please call me out on anything that is unclear.

Alright, that makes more sense. Random music can randomize emotional state, just like random drugs can randomize physical state. Personally, I listen to a single artist at a time.

Music randomizes emotion and mindstate.

Wait, where did "randomizes" come from? The study you link and the standard view say that music can induce specific emotions. The point of the study is that emotions induced by music can carry over into other areas, which suggests we might optimize when we use specific types of music. The study you link about music and accidents also suggests specific music decreased risks.

All the papers I'm immediately seeing on Google Scholar suggest there is no association between background music and studying effect... (read more)

2D_Malik
When I listen to music, I usually do so by putting a long multi-genre playlist on shuffle. That's what I was thinking of when I wrote that; I'll edit it. Listening to music selected to induce specific emotions seems like it could be useful. For instance, for motivation, it might be useful to play a long epic music mix.

Hmm... Atlas Shrugged does have (ostensible) paragons. Rand's idea of Romanticism as portraying "the world as it should be" seems to match up: "What Romantic art offers is not moral rules, not an explicit didactic message, but the image of a moral person—i.e., the concretized abstraction of a moral ideal." (source) Rand's antagonists do tend to be all flaws and no virtues though.

One more hypothesis after reading other comments:

HPMoR is a new genre where every major character either has no character flaws or is capable of rapid growth. In other words, the diametric opposite of Hamlet, Anna Karenina, or The Corrections. Rather than "rationalist fiction", a better term would be "paragon fiction". Characters have rich and conflicting motives so life isn't a walk in the park despite their strengths. Still everyone acts completely unrealistically relative to life-as-we-know-it by never doing something dumb or agains... (read more)

6Luke_A_Somers
Characters act against their own interests in HPMoR... and in Hikaru no Go, for that matter. Just, you get to see why they're doing it so it seems more reasonable at the time. Which is of course how it seems to them. We're just used to characters doing things that don't seem like good ideas at the time in other works.
-2[anonymous]
I have not read HPMoR (or have a particularly strong desire to), but if this sentence is true, then HPMoR is shitty literature with a particular TVTropes name. ---------------------------------------- How many people over 35 enjoyed HPMoR?
8Nornagest
The Culture books tend to star people on the margins of the eponymous Culture: disaffected citizens, spies, mercenaries, people from other involved (and usually more conservative) civilizations. They almost always have serious character flaws (a number of them are out-and-out assholes) and while character development does occur, generally toward Culture values, it's not usually dramatic. On the other hand, the culture itself, and the AI entities that run it, are presented as having few to no flaws from the narrative's perspective. While the characters are often critical of it, it's fairly clear where the author's sympathies lie. They're not rationalist fiction in the sense that Methods is, or even in the sense that Asimov's Foundation books are. They do make for a decent stab at eutopia from a socially liberal soft-transhumanist perspective, though not an especially radical one.
7blacktrance
Atlas Shrugged comes to mind.

I'm also somewhat confused by this. I love HPMoR and actively recommend it to friends, but to the extent Eliezer's April Fools' confession can be taken literally, characterizing it as "you-don't-have-a-word genre" and coming from "an entirely different literary tradition" seems a stretch.

Some hypotheses:

  1. Baseline expectations for Harry Potter fanfic are so low that when it turns out well, it seems much more stunning than it does relative to a broader reference class of fiction.
  2. Didactic fiction is nothing new, but high quality didacti
... (read more)

Exactly. No need to put tunnels underground when it makes substantially more sense to build platforms over existing roads. This also means cities can expand or rezone more flexibly since you can just build standard roads like now and then add bridges or full platforms when pedestrians enter the mix. Rain, snow, and deer don't require more than a simple aluminum structure.

What do you mean by applying Kelly to the LMSR?

Since relying on Kelly is equivalent to maximizing log utility of wealth, I'd initially guess there is some equivalence between a group of risk-neutral agents trading via the LMSR and a group of Kelly agents with equal wealth trading directly. I haven't seen anything around in the literature though.

"Learning Performance of Prediction Markets with Kelly Bettors" looks at the performance of double auction markets with Kelly agents, but doesn't make any reference to Hanson even though I know Pennock is... (read more)

0Slackson
Sorry, should've been more clear. I've started work on a rudimentary play money binary prediction market using LMSR in django (still very much incomplete, PM me for a link if you'd like), and my present interface is one of buying and selling shares, which isn't very user friendly. With a "changing the price" interface that Hanson details in his paper, accurate participants can easily lose all their wealth on predictions that they're moderately confident in, depending on their starting wealth. If I have it so agents can always bet, then the wealth accumulation in accurate predictors won't happen and the market won't actually learn which agents are more accurate. With an automated Kelly interface, it seems that participants should be able to input only their probability estimates, and either change the price to what they believe it to be if the cost is less than Kelly, or it would find a price which matches the Kelly criterion, so that agents with poorer predictive ability can keep playing and learn to do better, and agents with better predictive ability accumulate more wealth and contribute more to the predictions. However, I'm uncertain as to whether a) the markets would be as accurate as if I used a conventional "changing the price" interface (due to the fact that it seems we're doing log utility twice), and b) whether I can find the Kelly criterion for this, with a probability estimate being the only user input and the rest calculated from data about the market, the user's balance, etc.

Hidden Order by David Friedman is a popular book, but is semi-technical enough that it could serve as a textbook for an intro microeconomics course.

What are a few more structured approaches that could substantially improve matters? Some improvements can definitely be made, but I disagree that outcomes are much worse. Two studies suggest marriage markets are about 20% off the optimal match (Suen and Li (1999), "A direct test of the efficient marriage market hypothesis", based on Hong Kong data, and Cao et al (2010), "Optimizing the marriage market", based on Swiss data). While 20% is not trivial, it's not a major failure.

If there are major improvements to be had, I expect it to come... (read more)

0RomeoStevens
Those are interesting papers. Thanks for the pointers. By structure yeah, I mean pretty much anything. Basically we need a secular replacement for church that provides kids with access to a variety of trusted adults so they have lots of advice to draw from. Edit: I am confused. "We reallocate approximately 68% of individuals (7 out of 10) to a new couple that we posit has a higher likelihood of survival."

Thanks for the SA paper!

The parameter space is only two dimensional here, so it's not hard to eyeball roughly where the minimum is if I sample enough. I can say very little about the noise. I'm more interested in being able to approximate the optimum quickly (since simulation time adds up) than hitting it exactly. The approach taken in this paper based on a non-parametric tau test looks interesting.

The parameter space in this current problem is only two dimensional, so I can eyeball a plausible region, sample at a higher rate there, and iterate by hand. In another project, I had something with a very high dimensional parameter space, so I figured it's time I learn more about these techniques.

Any resources you can recommend on this topic then? Is there a list of common shortcuts anywhere?

0Lumifer
Well, optimization (aka search in parameter space) is a large and popular topic. There are a LOT of papers and books about it. And sorry, I don't know of a list of common shortcuts. As I mentioned they really depend on the specifics of the problem.

Not really. In this particular case, I'm minimizing how long it takes a simulation to reach one state, so the distribution ends up looking lognormal- or Poisson-ish.

Edit: Seeing your added question, I don't need an efficient estimator in the usual sense per se. This is more about how to search the parameter space in a reasonable way to find where the minimum is, despite the noise.

0Lumifer
Hm. Is the noise magnitude comparable with features in your search space? In other words, can you ignore noise to get a fast lock on a promising section of the space and then start multiple sampling? Simulated annealing that has been mentioned is a good approach but slow to the extent of being impractical for large search spaces. Solutions to problems such as yours are rarely general and typically depend on the specifics of the problem -- essentially it's all about finding shortcuts.

Does anyone have advice on how to optimize the expectation of a noisy function? The naive approach I've used is to sample the function for a given parameter a decent number of times, average those together, and hope the result is close enough to stand in for the true objective function. This seems really wasteful though.

Most of the algorithms I'm coming across (like modelling the objective function with Gaussian process regression) would be useful, but are more high-powered than I need. Any simple techniques better than the naive approach? Any recommendations among sophisticated approaches?
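
To make the naive approach (and one cheap refinement) concrete, a toy sketch; the simulation function, grid, and replication counts are stand-ins for the real problem.

```python
# Naive sample-average optimization of a noisy objective, plus a two-stage
# refinement: a cheap first pass, then heavier re-sampling of the best candidates.
# noisy_sim and the grid are stand-ins for the real simulation and parameters.

import random
import statistics

def noisy_sim(theta):
    # Stand-in for the real simulation: true objective plus noise.
    return (theta - 3.0) ** 2 + random.gauss(0, 2.0)

def estimate(theta, n):
    return statistics.mean(noisy_sim(theta) for _ in range(n))

grid = [0.5 * i for i in range(13)]  # theta in [0, 6]

# Naive: a fixed number of replications at every grid point.
naive_best = min(grid, key=lambda th: estimate(th, 30))

# Two-stage: rank everything cheaply, then re-sample only the top few.
shortlist = sorted(grid, key=lambda th: estimate(th, 5))[:3]
refined_best = min(shortlist, key=lambda th: estimate(th, 50))

print(naive_best, refined_best)
```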

1witzvo
You may find better ideas under the phrase "stochastic optimization," but it's a pretty big field. My naive suggestion (not knowing the particulars of your problem) would be to do a stochastic version of Newton's algorithm. I.e. (1) sample some points (x,y) in the region around your current guess (with enough spread around it to get a slope and curvature estimate). Fit a locally weighted quadratic regression through the data. Subtract some constant times the identity matrix from the estimated Hessian to regularize it; you can choose the constant (just) big enough to enforce that the move won't exceed some maximum step size. Set your current guess to the maximizer of the regularized quadratic. Repeat re-using old data if convenient.
2VincentYu
There are some techniques that can be used with simulated annealing to deal with noise in the evaluation of the objective function. See Section 3 of Branke et al (2008) for a quick overview of proposed methods (they also propose new techniques in that paper). Most of these techniques come with the usual convergence guarantees that are associated with simulated annealing (but there are of course performance penalties in dealing with noise). What is the dimensionality of your parameter space? What do you know about the noise? (e.g., if you know that the noise is mostly homoscedastic or if you can parameterize it, then you can probably use this to push the performance of some of the simulated annealing algorithms.)
2Lumifer
That rather depends on the particulars, for example, do you know (or have good reasons to assume) the characteristics of your noise? Basically you have a noisy sample and want some kind of an efficient estimator, right?

Your description of incomplete information is off. What you give as the definition of incomplete information is one type of imperfect information, where nature is added as a player.

A game has incomplete information when one player has more information than another about payoffs. Since Harsanyi, incomplete information has been seen as a special case of imperfect information with nature randomly assigning types to each player according to a commonly known distribution and payoffs given types being commonly known.

0cousin_it
Thanks, you're right and I didn't understand what the Harsanyi transformation was. Edited the post again.

The first option is standard. When the second interpretation comes up, those strategies are referred to as behavior strategies.

If every information set is visited at most once in the course of play, then the game satisfies no-absent-mindedness and every behavior strategy can be represented as a standard mixed strategy (but some mixed strategies don't have equivalent behavior strategies).

Kuhn's theorem says the game has perfect recall (roughly players never forget anything and there is a clear progression of time) if and only if mixed and behavior strategies are equivalent.

0Scott Garrabrant
Thank you, I did not know the terminology. The types of games we care about that inspire us to have to use UDT do not have perfect recall, so whether or not behavior strategies are possible is an important question. It also feels like an empirical question.

Haidt's claim is that liberals rely on purity/sacredness relatively less often, but it's still there. Some of the earlier work on the purity axis put heavy emphasis on sex or sin. Since then, Haidt has acknowledged that the difference between liberals and conservatives might even out if you add food or environmental concerns to purity.

9Viliam_Bur
Maybe it's about rationalization. The same feeling could be expressed by one person as: "this is a heresy" (because "heresy" is their party's official boo light) and by another person as: "this could harm people" (because "harming people" is their party's official boo light). But in fact both people just feel the idea is repulsive to them, but can't quickly explain why.

Yeah, environmentalist attitudes towards e.g. GMOs and nuclear power look awfully purity-minded to me. I'm not sure whether I want to count environmentalism/Green thought as part of the mainline Left, though; it's certainly not central to it, and seems to be its own thing in a lot of ways.

(Cladistically speaking it's definitely not. But cladistics can get you in trouble when you're looking at political movements.)

Haidt acknowledges that liberals feel disgust at racism and that this falls under purity/sacredness (he explicitly lists it in Table 1, pg. 59, of a somewhat older article). His claim is that liberals rely on the purity/sacredness scale relatively less often, not that they never engage it. Still, in your example, I'd expect the typical reaction to be anger at a fairness violation rather than disgust.

3James_Miller
But since the harm is trivial, no one is being treated unfairly absent disgust considerations.

My guess is the person most likely to defend this criterion is a Popperian of some flavor, since precise explanations (as you define them) can be cleanly falsified.

While it's nice when something is cleanly falsified, it's not clear we should actively strive for precision in our explanations. An explanation that says all observations are equally likely is hard to disprove and hence hard to gather evidence for by conservation of evidence, but that doesn't mean we should give it an extra penalty.

If all explanations have equal prior probability, then Bayesian... (read more)

0DanielLC
You shouldn't give it an extra penalty. He's just using an unusual method for explaining the first penalty. The penalty due to the fact that the friend who has all colors of marbles is less likely to drop a black one is equivalently stated as a penalty due to the fact that he has more possible colors he can drop.
1JQuinton
I agree it's very Popperian, but I purposefully shied away from mentioning anything "science" related since that seemed to be a source of conflict; this person specifically thinks that science is just something that people with lab coats do and is part of a large materialist conspiracy to reject morality. But leaving any "science-y" words out of it and relying on axioms of probability theory, he rejoined with something along the lines of "real life isn't a probability game". I kinda just threw up my hands at that point, telling myself that the inferential distance is too large to cross.
0Viliam_Bur
A straw Popperian could say that the hypothesis "flipping the coin provides random results" is unscientific, because it allows any results, and thus it cannot be falsified.

I've used 1-2mg of nicotine (via gum) a few times a month for a couple of years. I previously used it a few times a week for a few months before getting a methylphenidate prescription for ADD. There hasn't been any noticeable dependency, but I haven't had that with other drugs either.

Using it, I feel more focused and more confident, in contrast to caffeine, which tends to just leave me jittery, and methylphenidate, which is better for focus but doesn't have the slight positive emotion boost. Delivered via gum, the half-life is short (an hour at most). That's... (read more)

1eggman
Gwern noted in his analysis of nicotine that to overcome dependency effects, the user could cycle between different nootropics they use. For example, a three day cycle of nicotine, then caffeine, then modafinil, then repeat and start over with nicotine. Over the course of several months, I could trial different methods of consuming nicotine, i.e., patches, e-cigarettes, and gum. I would space each of these trials out over the course of several months because I wouldn't want each of the trials to be spaced too close together, and I wouldn't want to mess with my body by consuming too much nicotine anyway. As a protection against my subjective experience being useless, I would read more of Gwern's reviews on nootropics, and perhaps consult online nootropics communities on their methods for noting how they feel. There could be trials I could run, or ways of taking notes, which would allow me to make the information gleaned in that regard more useful.