All of pnrjulius's Comments + Replies

This question is broader than just AI. Economic growth is closely tied to technological advancement, and technological advancement in general carries great risks and great benefits.

Consider nuclear weapons, for instance: Was humanity ready for them? They are now something that could destroy us at any time. But on the other hand, they might be the solution to an oncoming asteroid, which could have destroyed us for millions of years.

Likewise, nanotechnology could create a grey goo event that kills us all; or it could lead to a world without poverty, without... (read more)

1gwern
Not much of a point in nukes' favor since there are so many other ways to redirect asteroids; even if nukes had a niche for taking care of asteroids very close to impact, it'd probably be vastly cheaper to just put up a better telescope network to spot all asteroids further off.
0Rob Bensinger
Nukes and bioweapons don't FOOM in quite the way AGI is often thought to, because there's a nontrivial proliferation step following the initial development of the technology. (Perhaps they resemble Oracle AGI in that respect; subsequent to being created, the technology has to unlock itself, either suddenly or by a gradual increase in influence, before it can have a direct catastrophic impact.) I raise this point because the relationship between technology proliferation and GDP may differ from that between technology development and GDP.

More, global risks tied to poverty (regional conflicts resulting in biological or nuclear war; poor sanitation resulting in pandemic diseases; etc.) may compete with ones tied to prosperity. Of course, these risks might be good things if they provided the slowdown Eliezer wants, gravely injuring civilization without killing it. But I suspect most non-existential catastrophes would have the opposite effect. Long-term thinking and careful risk assessment are easier when societies (and/or theorists) feel less immediately threatened; post-apocalyptic AI research may be more likely to be militarized, centralized, short-sighted, and philosophically unsophisticated, which could actually speed up UFAI development.

Two counter-arguments to the anti-apocalypse argument:

1. A catastrophe that didn't devastate our intellectual elites would make them more cautious and sensitive to existential risks in general, including UFAI. An AI-related crisis (that didn't kill everyone, and came soon enough to alter our technological momentum) would be particularly helpful.

2. A catastrophe would probably favor strong, relatively undemocratic leadership, which might make for better research priorities, since it's easier to explain AI risk to a few dictators than to a lot of voters.

As an alternative to being quite sure that the benefits somewhat outweigh the risks, you could somewhat less confidently believe that the benefits overwhelmingly outweigh the risks.
6Eliezer Yudkowsky
To be clear, the question is not whether we should divert resources from FAI research to trying to slow world economic growth, that seems risky and ineffectual. The question is whether, as a good and ethical person, I should avoid any opportunities to join in ensembles trying to increase world economic growth.

Well, ultimately, that was sort of the collective strategy the world used, wasn't it? (Not quite; a lot of low-level Nazis were pardoned after the war.)

And you can't ignore the collective action, now can you?

It's more a relative thing---"not quite as extremely biased towards academia as the average group of this level of intellectual orientation can be expected to be".

If so, then we're actually more rational right? Because we're not biased against academia as most people are, and aren't biased toward academia as most academics are.

It's not quite so dire. You can't do experiments from home usually, but you can interpret experiments from home thanks to Internet publication of results. So a lot of theoretical work in almost every field can be done from outside academia.

0DanArmak
Yes, but in most fields someone can't participate by only interpreting experiments from home. It's useful, but you can't build a career from it. Normally you really want to also be able to influence experiments in the lab to get the new data you want.

otherwise we would see an occasional example of someone making a significant discovery outside academia.

Should we all place bets now that it will be Eliezer?

5Shmi
He repeatedly mentioned that his skill is in formulating theorems, not proving them, and he has not formulated even one after some 5 years of working on the same problem, so the chances are not good.

Negative selection may be good, actually, for the vast majority of people who are ultimately going to be mediocre.

It seems like it may hurt the occasional genius... but then again, there are a lot more people who think they are geniuses than really are geniuses.

In treating broken arms? Minimal difference.

In discovering new nanotechnology that will revolutionize the future of medicine? Literally all the difference in the world.

3CronoDAS
Then don't you want the brilliant person to become a chemist or biologist instead of a physician or surgeon? You don't need a medical license to work on cell cultures in a lab.

I think a lot of people don't like using percentiles because they are zero-sum: Exactly 25% of the class is in the top 25%, regardless of whether everyone in the class is brilliant or everyone in the class is an idiot.

0Viliam_Bur
But on the opposite side of the spectrum is saying: "Everyone is so smart because they can read and write, and a thousand years ago most people couldn't do this!" (Strawman example, I know.)

Generally I would prefer to have a list (tree, directed acyclic graph...?) of all human knowledge, and give everyone a report saying: "This person understands these parts." But over time, the list/tree is growing. Of course it is OK to know a smaller part of total human knowledge, because the population is growing; but still, you need to know more than your ancestors (if your computer skills are the same as your grandma's, then she is a hero and you are a loser); on the other hand, some knowledge becomes obsolete.

I think a percentile across the whole country would be a good measure for comparing individual students or schools. And it would be nice to also calculate long-term changes, to know whether the country as a whole is improving.

Well, you want some negative selection: Choose dating partners from among the set who are unlikely to steal your money, assault you, or otherwise ruin your life.

This is especially true for women, for whom the risk of being raped is considerably higher and obviously worth negative selecting against.

0Desrtopa
That carries the assumption that the qualities you're positively selecting for don't have a strong negative correlation with the ones you're trying to select against. I don't think it's hard to lay out a few basic "are" qualities that imply "are not" for "violent, thief, etc."

I don't think it's quite true that "fail once, fail forever", but the general point is valid that our selection process is too much about weeding out rather than choosing the best. Also, academia doesn't seem to be very good at the negative selection that would make sense, e.g. excluding people who are likely to commit fraud or who have fundamentally anti-scientific values. (Otherwise, how can you explain how Duane Gish made it through Berkeley?)

I'm saying that the truth is not so horrifying that it will cause you to go into depression.

This is what I hope and desire to be true. But what I'm asking for here is evidence that this is the case, to counteract the evidence from depressive realism that would seem to say that no, actually the world is so terrible that depression is the only rational response.

What reason do we have to think that the world doesn't suck?

-1RRam
We have lived this far. Our forefathers lived here, successfully satisfying their wishes. Our children will also live here. That is the evidence, reason, and inspiration to face a sucking world and make it more comfortable.

Politico, PolitiFact, FactCheck.org

3Raw_Power
Thank you very much for sharing these. I am very glad to find out that such organizations exist.
pnrjulius170

The mutilation of male genitals in question is ridiculous in itself but hardly equivalent to the kind of mutilation done to female genitals.

Granted. Female mutilation is often far more severe.

But I think it's interesting that when the American Academy of Pediatrics proposed allowing female circumcision that really just was circumcision, i.e. cutting of the clitoral hood, people were still outraged. And so we see that even when the situation is made symmetrical, there persists what we can only call female privilege in this circumstance.

-2MugaSofer
See, now I'm wondering what the effects would actually be. Is it possible that "true" female circumcision would still have greater adverse effects? I'll note that I predict roughly the same outrage level regardless, but it still seems like an important question.

I know with 99% probability that the item on top of your computer monitor is not Jupiter or the Statue of Liberty. And a major piece of information that leads me to that conclusion is... you guessed it, the circumference of Jupiter and the height of the Statue of Liberty. So there you go, this "irrelevant" information actually does narrow my probability estimates just a little bit.

Not a lot. But we didn't say it was good evidence, just that it was, in fact, evidence.

(Pedantic: You could have a model of Jupiter or Liberty on top of your computer, but that's not the same thing as having the actual thing.)
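The "narrows my probability estimates just a little bit" claim can be made concrete with a toy Bayes update. All the numbers below are invented purely for illustration; only the direction of the update matters:

```python
# Toy Bayesian update: an object's size as (weak but real) evidence
# about what is on top of a monitor. All probabilities are invented.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) by Bayes' rule, given P(H), P(E | H), and P(E | not-H)."""
    joint_h = p_e_given_h * prior
    joint_not_h = p_e_given_not_h * (1 - prior)
    return joint_h / (joint_h + joint_not_h)

prior = 1e-12        # P(the thing on your monitor is Jupiter): tiny to start
# E = "the object fits on top of a monitor (under ~1 m across)".
# Jupiter's circumference is ~4.4e5 km, so P(E | it's Jupiter) is essentially 0.
p = posterior(prior, 1e-30, 0.99)
assert p < prior     # the size fact is evidence; it just wasn't needed much
```

The likelihood ratio here is astronomically lopsided, so even a near-zero prior gets pushed lower still: weak-seeming facts are still evidence whenever the two hypotheses predict them with different probabilities.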

The statistical evidence is that liberalism, especially social liberalism, is positively correlated with intelligence. This does not prove that liberalism is correct; but it does provide some mild evidence in that direction.

2tlhonmey
As an interesting phenomenon, I've noticed that when I question people in depth about their beliefs on specific issues, what they actually want is often seriously at odds with the political group to which they claim to adhere. It's almost as if political affiliations are tribal memberships, and people engage in doublethink to avoid risking those memberships, even when membership doesn't form a coherent whole with the rest of their ideology.

To the extent that IQ actually matters, I've noticed two patterns:

Firstly, to a certain extent, those with higher IQ tend to spend more years of their life in school, and most schools have a very definite liberal or conservative culture and actively punish "wrongthink" to a certain degree. So IQ correlation with political faction may be more indicative of the ratio between schools than anything else.

Secondly, once a person's IQ gets into the 130+ range, you seem to start finding a higher fraction of people who really despise the stupidity and waste of primate social politics, and so prefer consistency of internal logic over maintaining good tribal standing. These people are actually interesting to talk to about politics because they're genuinely interested in what the facts are and in whether policy actually meets its goals. Even when you disagree with their conclusions, you don't have to spend all your time pointing out the same contradictions again and again.
0waveman
It would provide significantly useful evidence if we had no other information to determine the truth of the tenets of conservatism. Given that we do, and that the 'evidence' provided by who believes liberalism vs. conservatism is not strong, I suggest it is better to ignore it. Why? Because these sorts of arguments are very dangerous: they readily degenerate into overvaluing social proof.
1BlueAjah
Declaration of bias: I am a liberal, I am intelligent, but I'm not a Democrat or Republican.

It's hard to measure liberalism. For example, half of black people say they are conservative and half say they are liberal. But most outsiders would say most black people are liberal (and it's common for 100% of black people in an area to vote for Obama). People judge their liberalism against people like themselves, so it's hard to compare groups. If you count most black people as liberals, then that intelligence difference between liberals and conservatives might disappear (if it exists; I haven't checked). For example, it's a proven fact that Republicans are smarter than Democrats (because of black people with an average IQ of 85 voting Democrat), although just between white people there is no real difference.

You also need to consider that intelligence comes with biases, even though it also improves your thinking. Intelligent people are biased towards things that benefit intelligent people, e.g. complexity, even if they hurt other people. Intelligent people are biased towards letting people do whatever they want, because intelligent people like themselves will do sensible things when given the choice. They aren't used to stupid people, who do stupid things when allowed to do whatever they want. Intelligent people need freedom, while stupid people need strong inviolable guidelines about acceptable behaviour.
-1Stephenjk
How are values true or false? You seem to be arguing for objectivist morality. Suppose all the greatest minds in philosophy, specifically ethics, believed in consequentialism. This provides no weight towards or against that particular ethical system. No one has value expertise. People can value one thing (security) or another (liberty); insert whatever values as necessary. The same is true of progressives and conservatives generally. That fact provides no weight towards what we should value.

It's a subtle matter, but... you clearly don't really mean determinism here, because you've said a hundred times before how the universe is ultimately deterministic even at the quantum level.

Maybe predictability is the word we want. Or maybe it's something else, like fairness or "moral non-neutrality"; it doesn't seem fair that Hitler could have that large an impact by himself, even though there's nothing remotely non-deterministic about that assertion.

4Sniffnoy
Perhaps something along the lines of "stability"? The idea being that small perturbations of input should lead to only small perturbations down the line. ("Stability" isn't really the proper word for that, but I'm not sure what is.)

Macroscopic determinism, i.e., the belief that an outcome was not sensitive to small thermal (never mind quantum) fluctuations. If I'm hungry and somebody offers me a tasty hamburger, it's macroscopically determined that I'll say yes in almost all Everett branches; if Zimbabwe starts printing more money, it's macroscopically determined that their inflation rates will rise further.

Yes, think about how none of us would ever have discovered Less Wrong if we never fucked around on the Internet.

This is not to say that we don't fuck around on the Internet more than we should, which I think I probably do and I wouldn't be surprised if most of you do as well.

Not critical to your point, but I can't stand this habitual exchange:

But there's a lot of small habits in everything we do, that we don't really notice. Necessary habits. When someone asks you how you are, the habitual answer is 'Fine, thank you,' or something similar. It's what people expect. The entire greeting ritual is habitualness, to the point that if you disrupt the greeting, it throws people off.

When people ask how I am, I want to give them information. I want to tell them, "Actually I've had a bad headache all day; and I'm underemployed r... (read more)

7Alicorn
It is not a scarce resource on the relevant scale. Water is valuable in the sense that you can do a thousand things, some essential, with it; this does not mean that flush toilets are an abomination.

It's about ten times easier to become vegetarian than it is to reduce your consumption of meat. Becoming vegetarian means refusing meat every time no matter what, and you can pretty much manage that from day one. Reducing your meat consumption means somehow judging how much meat you're eating and coming up with an idea of how low you want it to go, and pretty soon you're just fudging all the figures and eating as much as you were anyway.

Likewise, I tried for a long time to "reduce my soda drinking" and could not achieve this. Now I have switched to "sucralose-based sodas only" and I've been able to do it remarkably well.

For the most part I agree with this post, but I am not convinced that this is true:

Anyone can develop any “character trait.” The requirement is simply enough years of thoughts becoming words becoming actions becoming habit.

A lot of measured traits are extremely stable over lifespan (IQ, conscientiousness, etc.) and seem very difficult, if not impossible, to train. So the idea that someone can just get smarter through practice does not appear to be supported by the evidence.

0Swimmer963 (Miranda Dixon-Luinenburg)
I don't think most people would consider IQ a 'character trait'... However, that's a matter of terminology and doesn't negate your point. I agree that 'fluid intelligence' is probably relatively innate and would be hard to change (although there's some research that training tasks such as the dual n-back can have an effect.) Crystallized intelligence, as basically the sum of your knowledge and ability to apply it, can definitely be increased by practice. IQ in isolation strikes me as something that wouldn't matter as much as IQ and amount of experience and good work habits and openness to criticism and improvement. As for conscientiousness, I have no idea what kind of research has been done on its stability as a character trait, but I see no reason why someone who was aware enough to make a decision to become more conscientious wouldn't be able to train themselves in habits that would, at the very least, make them able to get more work done and appear harder-working to others.
Strange7210

I agree that women in the aggregate have worse employment prospects than men in the aggregate at present. I was specifically referring to never-married, childless women vs. never-married, childless men, which that report does not seem to address.

The answer should be obvious: Expected utility.

In practical terms, this means weighting according to severity, because the quantity of people affected is very close to equal. So we focus on the worst forms of oppression first, and then work our way up towards milder forms.

This in turn means that we should be focusing on genital mutilation and voting rights. (And things like Elevatorgate, for those of you who follow the atheist blogosphere, should obviously be on a far back burner.)

Because female circumcision is rare and illegal in developed nations?

There's obviously a female advantage here, at least in the Western world. Mutilating female genitals draws the appropriate outrage, while mutilating male genitals is ignored or even condoned. (I've seen people accused of "anti-Semitism" just for pointing out that male circumcision has virtually no actual medical benefits.)

wedrifid130

Mutilating female genitals draws the appropriate outrage, while mutilating male genitals is ignored or even condoned.

The mutilation of male genitals in question is ridiculous in itself but hardly equivalent to the kind of mutilation done to female genitals.

pnrjulius-40

Upvoted because it's a well-sourced and coherent argument.

Which is not to say that I agree with the conclusion. Okay, so there may be this effect of women being identified with their bodies.

But here's the thing: WE ARE OUR BODIES. We should be identifying with them, and if we're not, that's actually a very serious defect in our thinking (probably the defect that leads to such nonsense as dualism and religion).

Now, I guess you could say that maybe women are taught to care too much about physical appearance or something like that (they should care about othe... (read more)

I'm not sure I would call it "oppression", but it's clearly true that heterosexual men are by far the MOST controlled by restrictive gender norms. It is straight men who are most intensely shoehorned into this concept of "masculinity" that may or may not suit them, and their status is severely downgraded if they deviate in any way.

If you doubt this, imagine a straight man wearing eye shadow and a mini-skirt. Compare to a straight woman wearing a tuxedo.

See the difference?

I've always found that recommendations of what to do are much more useful than any kind of praise, reward, punishment, or criticism.

On the other hand, if everyone told you how to do everything, you might never learn the very important skill of teaching yourself to do things.

If that's the case (and it seems like it is), then reinforcing yourself is going to be almost impossible, because you will by definition know the reinforcement script.

0Caspian
Reinforcing effort only in combination with poor performance wasn't the intent. Pick a better criterion that you can reinforce with honest self-praise. You do need to start off with low enough standards so you can reward improvement from your initial level though.

Everyone getting an A isn't reinforcement. Reinforcement has to be conditional on something. If you give everyone who writes a long paper an A, that's reinforcing writing long papers. If you give everyone who writes a well-written paper an A, that's reinforcing well-written papers (and probably more what you want to do).

But if you just give everyone an A, that may be positive, but it simply isn't reinforcement.

So you're saying you think that while maybe typically happy people are more irrational, it's still possible to be rational and happy.

I guess I agree with that. But sometimes I feel like I may just hope this is true, and not actually have good evidence for it.

2azzu
I'm pretty rational and I chose to become happy, and now I feel happy most of the time. I'm continuously choosing to be happy. Idk if that's some valid evidence for you (or if you even care after 10 years lol), you'd have to believe me that I'm rational and that I'm actually happy, but there you go :D
-3DanielLC
I'm saying that the truth is not so horrifying that it will cause you to go into depression. If the only way to become rational involves depression, this just means that becoming rational sucks. It doesn't mean that the world sucks.

Makes sense from the corporation's perspective. But also kinda sounds like moral hazard to me.

Of course. That was the point. If you can make more bets than you can cover, and suffer no liability when you can't, you've got yourself a license to steal. And clearly the trader knew it.

Well, maybe. Depending on how much it costs to do that experimental treatment, compared to other things we could do with those resources.

(Actually a large part of the problem with rising medical costs in the developed world right now is precisely due to heavier use of extraordinary experimental treatments.)

Often it clearly isn't; so don't do that sort of research.

Don't spend $200 million trying to determine if there are a prime number of green rocks in Texas.

Though that's actually illegal, so you'd have to include the chance of getting caught.

The trick is to be able to tell the difference.

And what a trick it is!

This is why I have decided not to be an entrepreneur. All the studies say that your odds are just not good enough to be worth it.

1tlhonmey
The odds are long because all the obviously good ideas with no risk of failure are immediately snapped up by everyone. The key is to learn to spot those so you can move on them first, and also to keep a sane estimate of how much you're gambling vs. the potential reward, so that your net expected payout remains positive.
1themusicgod1
...and even if you are, people who are able to re-arrange the odds to their favour may end up crowding out the honest ones ;)

This makes perfect sense in terms of Bayesian reasoning. Unexpected evidence is much more powerful evidence that your model is defective.

If your model of the world predicted that the Catholic Church would never say this, well... your model is wrong in at least that respect.

I don't think you're just rationalizing. I think this is exactly what the philosophy of mathematics needs in fact.

If we really understand the foundations of mathematics, Gödel's theorems should seem to us, if not irrelevant, then perfectly reasonable---perhaps even trivially obvious (or at least trivially obvious in hindsight, which is of course not the same thing), the way that a lot of very well-understood results seem to us.

In my mind I've gotten fairly close to this point, so maybe this will help: By being inside the system, you're always going to get &... (read more)

1ec429
There is a further subtlety here. As I discussed in "Syntacticism", in Gödel's theorems number theory is in fact talking about "number theory", and we apply a metatheory to prove that "number theory is "number theory"", and think we've proved that number theory is "number theory". The answer I came to was to conclude that number theory isn't talking about anything (ie. ascription of semantics to mathematics does not reflect any underlying reality), it's just a set of symbols and rules for manipulating same, and that those symbols and rules together embody a Platonic object. Others may reach different conclusions.

Well, some rather serious physicists have considered the idea: tachyons

1wizzwizz4
That's imaginary mass implying superluminal velocity with real energy. Similar, but the other way around.
pnrjulius9-1

But we know that he was unusual: He has a very high IQ. This by itself raises the probability of being a math crank (it also raises the probability of being a mathematician of course).

It's similar to how our LW!Harry Potter has increased chances of being both hero and Dark Lord.

Actually, perpetual motion using vacuum energy might really be feasible, since the vacuum energy keeps expanding itself... at present, it looks sort of like a loophole in the laws of nature.

On the other hand, quantum gravity may close this loophole.

9DaFranker
Expansion of the original point: finding various "loopholes" in the "laws of nature" that would allow FTL/perpetual-motion/infinite-scalable-free-energy/[insert-absurdly-surreal-technology-here].

I did that from age 16 (initially as a bored-by-this-math-class-let's-think-about-something-else tactic, gradually becoming more serious) onwards to around 19, when I finally realized that the "loopholes" aren't actually in the laws of nature, just in how shitty our (or in many of those cases, mine specifically) understanding of them is.

If there exists any loophole in the laws of nature such that something impossible becomes possible through this loophole, then the map was upside-down, and it was a feature of the laws of nature all along; the laws of nature had always permitted it, we just didn't know how. The Universe doesn't rape itself.

I did exactly the same thing.

I also discovered shortly thereafter that I could force an n-coloring if I allowed discontinuous regions, which might seem trivial... except that real nations on real maps are sometimes discontinuous (Alaska, anyone?).

It looks like there's still some serious controversy on the issue.

But suppose for a moment that it's true: Suppose that depressed people really do have more accurate beliefs, and that this really is related to their depression.

What does this mean for rationality? Is it more rational to be delusional and happy or to be accurate and sad? Or can we show that even in light of this data there is a third option, to actually be accurate and happy?

7Tynam
It seems to me - and I'm a depressive - that even if depressed people really do have more accurate self-assessment, your third option is still the most likely.

One recurrent theme on this site is that humans are prone to indulge cognitive biases which _make them happy_. We try to avoid the immediate hedonic penalty of admitting errors, foreseeing mistakes, and so on. We judge by the availability heuristic, not by probability, when we imagine a happy result like winning the lottery.

When I'm in a depressed state, I literally _can't_ imagine a happy result. I imagine that all my plans will fail and striving will be useless. This is still not a rational state of mind. It's not _inherently_ more accurate. But it's a state of mind that's inherently more resistant to certain specific errors - such as over-optimistic probability assessment or the planning fallacy. These errors of optimism are common, especially in self-assessment. Which might well be the reason depressed people make more accurate self-assessments - humans as a whole have a cognitive bias to personal overconfidence. But it's also inherently more resistant to optimistic conclusions, _even when they're backed by the evidence_.

(It's more rational to be accurate and sad than delusional and happy - because happiness based on delusion frequently crashes into real-world disasters, whereas if you're accurate and sad you can _use_ the accuracy to reduce the things you're sad about.)
2DanielLC
If you're an egoist, it's best to be delusional and happy. If you're not, the needs of others outweigh your own. Of course, even if depressed people are more accurate, that doesn't mean that they're more productive. Then again, they may be able to use their more accurate beliefs to find a better charity and make up the difference. Of course, you could just have a depressed philanthropist tell you where to donate.

Depressive realism is an incredibly, well, depressing fact about the world.

Is there something we're missing about it though? Is the world actually such that understanding it better makes you sad, or is it rather that for whatever reason sad people happen to be better at understanding the world?

And if it is in fact that understanding makes you sad... what does this mean for rationality?

5DanielLC
It's not that depressing. If it was lack of bias that caused the depression, that would be bad, but I'm pretty certain it's the other way around.

Actually, realizing this parallel causes me to be even more dubious of the efficient market hypothesis.

As compelling as it may sound when you say it, this line of reasoning plainly doesn't work for scientific truth... so why should it work in finance?

Behavioral finance gives us plenty of reasons to think that whole markets can remain radically inefficient for long periods of time. What this means for the individual investor, I'm not sure. But what it means for the efficient market hypothesis? Death.

1tlhonmey
The thing to keep in mind is that a perfectly efficient market is like an ideal gas. It's a useful tool for thinking about what's likely to happen if you go messing with variables, but it basically never actually exists in nature.

We use markets in real life not because they're perfect, but because, on average, they get a more correct answer more often and for less effort than any other system we know of. Could there be something better? Of course. We just haven't discovered it yet. Are there situations where, in hindsight, we can see that some other system would have performed better than a market? Yup. Hindsight's awesome that way. Can we predict well in advance when to use some other system? Not particularly. And if we could, then that ability would become part of the market, so the market would still be likely to perform better when used globally.

So yeah, markets can remain horribly inefficient for a long time under some circumstances. Just remember that the same things that keep a market inefficient will likely also cause mistakes by other methods of calculation. So when you switch away from the market, you're basically going double-or-nothing, and the odds generally aren't in your favor.

I think majoritarianism is ultimately opposed to tsuyoku naritai, because it prevents us from ever advancing beyond what the majority believes. We rely upon others to do knowledge innovation for us, waiting for the whole society to, for example, believe in evolution, or understand calculus, before we will do so.

Though he might change his mind as we explained how to cure a whole bunch of diseases he thought were intractable.

2MugaSofer
Through a chronophone? Wouldn't that just repeat the nonsense ancient doctors believed, and cures to diseases he already knows how to deal with?
pnrjulius200

Actually I think I tend to do the opposite. I undervalue subgoals and then become unmotivated when I can't reach the ultimate goal directly.

E.g. I'm trying to get published. Book written, check. Query letters written, check. Queries sent to agents, check. All these are valuable subgoals. But they don't feel like progress, because I can't check off the box that says "book published".

5Elias
I hope that you are not still struggling with this, but for anyone else in this situation: I would think that you need to change the way you set your goals. There is loads of advice out there on this topic, but there are a few rules I can recall off the top of my head:

* "If you formulate a goal, make it concrete and achievable, make the path clear, and if possible decrease the steps required." In your case, every one of the subgoals already had a lot of required actions, so the overarching goal of "publish a book" might be too broadly formulated.

* "If at all possible, don't use external markers for your goals." What apparently usually happens is that either you drop all your good behaviour once you cross the finish line, or your goal becomes/reveals itself to be unreachable and you feel like you can do nothing right (seriously, the extent to which this happens... incredible), etc.

* "Focus more on the trajectory than on the goal itself." Once you get there, you will want different things, and what you have learned and acquired will just be normal. There is no permanent state of "achieving the goal"; there is the path there, and then the path past it. Very roughly speaking.

All the best.

I largely agree with you, but I think that there's something we as rationalists can realize about these disagreements, which helps us avoid many of the most mind-killing pitfalls.

You want to be right, not be perceived as right. What really matters, when the policies are made and people live and die, is who was actually right, not who people think is right. So the pressure to be right can be a good thing, if you leverage it properly into actually trying to get the truth. If you use it to dismiss and suppress everything that suggests you are wrong, that's no... (read more)

There is another way: Look really really hard with tools that would be expected to work. If you find something? Yay, your hypothesis is confirmed. If you don't? You'd better start doubting your hypothesis.

You already do this in many situations I'm sure. If someone said, "You have a million dollars!" and you looked in your pockets, your bank accounts, your stock accounts (if any), etc. and didn't find a million dollars in them (or collectively in all of them put together), you would be pretty well convinced that the million dollars you allegedly have doesn't exist. (In fact, depending on your current economic status you might have a very low prior in the first place; I know I would.)
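That search-and-fail-to-find logic is just repeated Bayesian updating on a miss. A minimal sketch, with made-up search probabilities and the simplifying assumption of no false positives:

```python
# Absence of evidence as evidence of absence (illustrative numbers only).
# Assumes each search finds the thing with probability p_find if it exists,
# and never produces a false positive.

def update_on_miss(prior, p_find_if_true):
    """Posterior P(H) after one search that found nothing."""
    p_miss_if_true = (1 - p_find_if_true) * prior
    p_miss_if_false = 1.0 * (1 - prior)   # a miss is certain if H is false
    return p_miss_if_true / (p_miss_if_true + p_miss_if_false)

p = 0.5                          # generous prior on "you have a million dollars"
for p_find in (0.9, 0.9, 0.9):   # pockets, bank accounts, brokerage accounts
    p = update_on_miss(p, p_find)
print(round(p, 4))               # prints 0.001: belief collapses after three misses
```

The key condition is that the tools "would be expected to work": if a search has no real chance of finding the thing (p_find near 0), the update does almost nothing, which is exactly when failing to find something tells you nothing.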

That's a good point. And clearly court standards for evidence are not the same as Bayesian standards; in court lots of things don't count that should (like base rate probabilities), and some things count more than they should (like eyewitness testimony).
