"Stupid" questions thread
r/Fitness does a weekly "Moronic Monday", a judgment-free thread where people can ask questions that they would ordinarily feel embarrassed for not knowing the answer to. I thought this seemed like a useful thing to have here - after all, the concepts discussed on LessWrong are probably at least a little harder to grasp than those of weightlifting. Plus, I have a few stupid questions of my own, so it doesn't seem unreasonable that other people might as well.
Why are we throwing the word "Intelligence" around like it actually means anything? The concept is so ill-defined it should be in the same set as "Love."
Why is average utilitarianism popular among some folks here? The view doesn't seem to be at all popular among professional population ethicists.
I don't like average utilitarianism, and I wasn't even aware that most folks here did, but I still have a guess as to why.
For many people, average utilitarianism is believed to be completely unachievable. There is no way to discover people's utility functions in a way that can be averaged together. You cannot get people to honestly report their utility functions, and further they can never even know them, because they have no way to normalize and figure out whether or not they actually care more than the person next to them.
However, a sufficiently advanced Friendly AI may be able to discover the true utility functions of everyone by looking into everyone's brains at the same time. This makes average utilitarianism an actual plausible option for a futurist, but complete nonsense for a professional population ethicist.
This is all completely a guess.
This does not explain a preference for average utilitarianism over total utilitarianism. Avoiding the "repugnant conclusion" is probably a factor.
Most people here do not endorse average utilitarianism.
I thought "average utiltarianism" referred to something like "my utility function is computed by taking the average suffering and pleasure of all the people in the world", not "I would like the utility functions of everyone to be averaged together and have that used to create a world".
Don't think it is.
Is it okay to ask completely off-topic questions in a thread like this?
As the thread creator, I don't really care.
Just now rushes onto Less Wrong to ask about taking advantage of 4chan's current offer of customized ad space to generate donations for MIRI
Sees thread title
Perfect.
So, would it be a good idea? The sheer volume of 4chan's traffic makes it a decent pool for donations, and given the attitude of its demographic, it might be possible to pitch the concept in an appealing way.
Linking to MIRI's donation page might be useful, but please please don't link to LessWrong on 4chan - it could have some horrible consequences.
Is there any chance I might be sleep deprived if I wake up before my alarm goes off more than 95% of the time?
I've been working pretty much every day for the past year but I had two longish breaks. After each of them there was a long period of feeling pretty awful all the time. I figured out eventually that this was probably how long it took me to forget what ok feels like. Is this plausible or am I probably ok given sufficient sleep and adequate diet?
Also, does mixing modafinil and starting strength sound like a bad idea? I know sleep is really important for recovery and gainz but SS does not top out at anything seriously strenuous for someone who isn't ill and demands less than 4 hours gym time a week.
You might be but this would not be evidence for it. If anything it is slight evidence that you are not sleep deprived - if you were it would be harder to wake up.
Modafinil might lead you down the sleep deprivation road but this ^ would not be evidence for it.
Yes. If you have a computer and you haven't made an unusually concerted effort not to be sleep-deprived, you are almost certainly sleep-deprived by ancestral standards. Not sure whether sleeping more is worth the tradeoff, though. Have you tried using small amounts of modafinil to make your days more productive, rather than to skip sleep?
If I take the outside view and account for the fact that thirty-something percent of people, including a lot of really smart people, believe in Christianity, and that at least personally I have radically changed my worldview a whole bunch of times, then it seems like I should assign at least a 5% or so probability to Christianity being true. How, therefore, does Pascal's Wager not apply to me? Even if we make it simpler by taking away the infinite utilities and merely treating Heaven as ten thousand years or so of the same level of happiness as the happiest day in my life, and treating Hell as ten thousand years or so of the same level of unhappiness as the unhappiest day in my life, the argument seems like it should still apply.
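For concreteness, here is a minimal sketch (in Python) of the bounded expected-value comparison described above. Every number in it is an illustrative assumption taken from or invented around the question, not a claim about the actual probabilities, utilities, or costs involved.

```python
# Bounded version of the wager sketched above; all numbers are illustrative assumptions.
p_true = 0.05                 # the ~5% outside-view probability suggested above
days = 10_000 * 365           # "ten thousand years or so"
u_best_day = 1.0              # utility of the happiest day, in arbitrary units
u_worst_day = -1.0            # utility of the unhappiest day
cost_of_practice = 10_000.0   # assumed lifetime cost of practicing a false religion (made up)

heaven = days * u_best_day    # payoff if the religion is true and you believe
hell = days * u_worst_day     # payoff if it is true and you don't believe

ev_believe = p_true * heaven + (1 - p_true) * (-cost_of_practice)
ev_disbelieve = p_true * hell

print(f"EV(believe)    = {ev_believe:>12,.0f}")
print(f"EV(disbelieve) = {ev_disbelieve:>12,.0f}")
# With these made-up numbers the bounded wager still favors belief; the real
# dispute is over whether 5% is anywhere near the right probability and what
# the costs of belief actually are.
```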
Why such a high number? I cannot imagine any odds I would take on a bet like that.
People being religious is some evidence that religion is true. Aside from drethelin's point about multiple contradictory religions, religions as actually practiced make predictions. It appears that those predictions do not stand up to rigorous examination.
To pick an easy example, I don't think anyone thinks a Catholic priest can turn wine into blood on command. And if an organized religion does not make predictions that could be wrong, why should you change your behavior based on that organization's recommendations?
To me it is only evidence that people are irrational.
The issue is: How do you know that you aren't just as irrational as them?
My personal answer:
I'm smart. They're not (IQ test, SAT, or a million other evidences). Even though high intelligence doesn't at all cause rationality, in my experience judging others it's so correlated as to nearly be a prerequisite.
I care a lot (but not too much) about consistency under the best / most rational reflection I'm capable of. Whenever this would conflict with people liking me, I know how to keep a secret. They don't make such strong claims of valuing rationality. Maybe others are secretly rational, but I doubt it. In the circles I move in, nobody is trying to conceal intellect. If you could be fun, nice, AND seem smart, you would do it. Those who can't seem smart, aren't.
I'm winning more than they are.
That value doesn't directly lead to having a belief system where individual beliefs can be used to make accurate predictions. For most practical purposes the forward–backward algorithm produces better models of the world than Viterbi. Viterbi optimizes for overall consistency while the forward–backward algorithm looks at local states.
If you have uncertainty in the data about which you reason, the world view with the most consistency is likely flawed.
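To make the Viterbi vs. forward–backward contrast concrete, here is a minimal sketch of a toy hidden Markov model (the states, transition, and emission numbers are all invented for illustration): Viterbi commits to the single globally most consistent state sequence, while forward–backward reports, for each time step, the locally most probable state given all the data, and the two answers can disagree.

```python
import numpy as np

# Toy 2-state HMM; every parameter here is an invented illustration value.
start = np.array([0.5, 0.5])            # P(initial state)
trans = np.array([[0.9, 0.1],           # P(state_t | state_{t-1})
                  [0.1, 0.9]])
emit = np.array([[0.8, 0.2],            # P(observation | state), rows = states
                 [0.3, 0.7]])
obs = [0, 1, 0, 1, 1]                   # observed symbols

T, S = len(obs), len(start)

# Viterbi: the single globally most consistent state sequence.
delta = np.zeros((T, S))
back = np.zeros((T, S), dtype=int)
delta[0] = start * emit[:, obs[0]]
for t in range(1, T):
    for s in range(S):
        scores = delta[t - 1] * trans[:, s]
        back[t, s] = scores.argmax()
        delta[t, s] = scores.max() * emit[s, obs[t]]
viterbi_path = [int(delta[-1].argmax())]
for t in range(T - 1, 0, -1):
    viterbi_path.append(int(back[t, viterbi_path[-1]]))
viterbi_path.reverse()

# Forward-backward: the locally most probable state at each time step.
alpha = np.zeros((T, S))
beta = np.ones((T, S))
alpha[0] = start * emit[:, obs[0]]
for t in range(1, T):
    alpha[t] = (alpha[t - 1] @ trans) * emit[:, obs[t]]
for t in range(T - 2, -1, -1):
    beta[t] = trans @ (emit[:, obs[t + 1]] * beta[t + 1])
posterior = alpha * beta
posterior /= posterior.sum(axis=1, keepdims=True)
marginal_path = posterior.argmax(axis=1).tolist()

print("Viterbi (globally most consistent path):", viterbi_path)
print("Forward-backward (per-step argmax):     ", marginal_path)
```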
One example is heat development in some forms of meditation. The fact that our body can develop heat through thermogenin without any shivering is a relatively new biochemical discovery. There were plenty of self-professed rationalists who didn't believe in any heat development during meditation because the people meditating don't shiver. In examples like that, the search for consistency leads to denying important empirical evidence.
It takes a certain humility to accept that there is heat development during meditation without knowing a mechanism that can account for it.
People who want to signal socially that they know it all don't have the epistemic humility that allows for the insight that there are important things they just don't understand.
To quote Nassim Taleb: "It takes extraordinary wisdom and self control to accept that many things have a logic we do not understand that is smarter than our own."
For the record, I'm not a member of any religion.
I don't think it's fair to say that none of the practical predictions of religion hold up to rigorous examination. In Willpower by Roy Baumeister, the author describes well how organisations like Alcoholics Anonymous can effectively use religious ideas to help people quit alcohol.
Buddhist meditation is also a practice that has a lot of backing in rigorous examination.
On LessWrong Luke Muehlhauser wrote that Scientology 101 was one of the best learning experiences of his life, notwithstanding the dangers that come from the group.
Various religions do advocate practices that have concrete real-world effects. Focusing on whether or not the wine really gets turned into blood misses the point if what you care about is the practical benefits and disadvantages of following a religion.
Alcoholics Anonymous is famously ineffective, but separate from that: What's your point here? Being a Christian is not the same as subjecting Christian practices to rigorous examination to test for effectiveness. The question the original asker asked was not 'Does religion have any worth?' but 'Should I become a practicing Christian to avoid burning in hell for eternity?'
Neither do Catholics think their priests turn wine into actual blood. After all, they're able to see and taste it as wine afterwards! Instead they're dualists: they believe the Platonic Form of the wine is replaced by that of blood, while the substance remains. And they think this makes testable predictions, because they think they have dualistic non-material souls which can then somehow experience the altered Form of the wine-blood.
Anyway, Catholicism makes lots of other predictions about the ordinary material world, which of course don't come true, and so it's more productive to focus on those. For instance, the efficacy of prayer, miraculous healing, and the power of sacred relics and places.
I really don't think that the vast majority of Catholics bother forming a position regarding transubstantiation. One of the major benefits of joining a religion is letting other people think for you.
This is probably true, but the discussion was about religion (i.e. official dogma) making predictions. Lots of holes can be picked in that, of course.
Qiaochu_Yuan has it right - the vast majority of Christians do not constitute additional evidence.
Moreover, the Bible (Jewish, Catholic, or Protestant) describes God as an abusive jerk. Everything we know about abusive jerks says you should get as far away from him as possible. Remember that 'something like the God of the Bible exists' is a simpler hypothesis than Pascal's Christianity, and in fact is true in most multiverse theories. (I hate that name, by the way. Can't we replace it with 'macrocosm'?)
More generally, if for some odd reason you find yourself entertaining the idea of miraculous powers, you need to compare at least two hypotheses:
*Reality allows these powers to exist, AND they already exist, AND your actions can affect whether these powers send you to Heaven or Hell (where "Heaven" is definitely better and not at all like spending eternity with a human-like sadist capable of creating Hell), AND faith in a God such as humans have imagined will send you to Heaven, AND lack of this already-pretty-specific faith will send you to Hell.
*Reality allows these powers to exist, AND humans can affect them somehow, AND religion would interfere with exploiting them effectively.
I should think that this is more likely to indicate that nobody, including really smart people, and including you, actually knows what's what, and that trying to chase after all these Pascal's muggings is pointless because you will always run into another one that seems convincing from someone else smart.
Yes, but there are highly probable alternate explanations (other than the truth of Christianity) for their belief in Christianity, so the fact of their belief is very weak evidence for Christianity. If an alarm goes off whenever there's an earthquake, but also whenever a car drives by outside, then the alarm going off is very weak (practically negligible) evidence for an earthquake. More technically, when you are trying to evaluate the extent to which E is good evidence for H (and consequently, how much you should update your belief in H based on E), you want to look not at the likelihood Pr(E|H), but at the likelihood ratio Pr(E|H)/Pr(E|~H). And the likelihood ratio in this case, I submit, is not much more than 1, which means that updating on the evidence shouldn't move your prior odds all that much.
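A toy numerical illustration of that point, with invented numbers: evidence whose likelihood ratio is close to 1 barely moves the posterior, whereas strongly diagnostic evidence does.

```python
# Toy illustration; all numbers invented.
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H|E) via the odds form of Bayes' theorem."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * (p_e_given_h / p_e_given_not_h)
    return posterior_odds / (1 + posterior_odds)

# "Many people believe H" is roughly as likely whether or not H is true,
# so the likelihood ratio is near 1 and the posterior barely moves:
print(posterior(prior=0.01, p_e_given_h=0.95, p_e_given_not_h=0.90))  # ~0.0105

# Contrast with strongly diagnostic evidence (likelihood ratio of 19):
print(posterior(prior=0.01, p_e_given_h=0.95, p_e_given_not_h=0.05))  # ~0.16
```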
This seems irrelevant to the truth of Christianity.
That probability is way too high.
Of course, there are also perspective-relative "highly probable" alternate explanations than sound reasoning for non-Christians' belief in non-Christianity. (I chose that framing precisely to make a point about what hypothesis privilege feels like.) E.g., to make the contrast in perspectives stark, demonic manipulation of intellectual and political currents. E.g., consider that "there are no transhumanly intelligent entities in our environment" would likely be a notion that usefully-modelable-as-malevolent transhumanly intelligent entities would promote. Also "human minds are prone to see agency when there is in fact none, therefore no perception of agency can provide evidence of (non-human) agency" would be a useful idea for (Christian-)hypothetical demons to promote.
Of course, from our side that perspective looks quite discountable because it reminds us of countless cases of humans seeing conspiracies where it's in fact quite demonstrable that no such conspiracy could have existed; but then, it's hard to say what the relevance of that is if there is in fact strong but incommunicable evidence of supernaturalism—an abundance of demonstrably wrong conspiracy theorists is another thing that the aforementioned hypothetical supernatural processes would like to provoke and to cultivate. "The concept of 'evidence' had something of a different meaning, when you were dealing with someone who had declared themselves to play the game at 'one level higher than you'." — HPMOR. At roughly this point I think the arena becomes a social-epistemic quagmire, beyond the capabilities of even the best of Lesswrong to avoid getting something-like-mind-killed about.
Why?
I agree that this doesn't even make sense. If you're super intelligent/powerful, you don't need to hide. You can if you want, but ...
Not an explanation, but: "The greatest trick the Devil ever pulled..."
Is people believing in Christianity significantly more likely under the hypothesis that it is true, as opposed to under the hypothesis that it is false? Once one person believes in Christianity, does more people believing in Christianity have significant further marginal evidentiary value? Does other people believing in Christianity indicate that they have knowledge that you don't have?
Yes.
Yes.
Yes.
(Weakly.)
I agree completely. It's impossible for me to imagine a scenario where a marginal believer is negative evidence against the belief - at best you can explain away the belief ("they're just conforming" lets you approach zero slope once it's a majority religion with a death penalty for apostates).
I have found this argument compelling, especially the portion about assigning a probability to the truth of Christian belief. Even if we have arguments that seem to demonstrate why it is that radically smart people believe a religion without recourse to there being good arguments for the religion, we haven't explained why these people instead think there are good arguments. Sure, you don't think they're good arguments, but they do, and they're rational agents as well.
You could say, "well they're not rational agents, that was the criticism in the first place," but we have the same problem that they do think they themselves are rational agents. What level do we have to approach that allows you to make a claim about how your methods for constructing probabilities trump theirs? The highest level is just, "you're both human," which makes valid the point that to some extent you should listen to the opinions of others. The next level "you're both intelligent humans aimed at the production of true beliefs" is far stronger, and true in this case.
Where the Wager breaks down for me is that much more is required to demonstrate that if Christianity is true, God sends those who fail to produce Christian belief to Hell. Of course, this could be subject to the argument that many smart people also believe this corollary, but it remains true that it is an additional jump, and that many fewer Christians take it than who are simply Christians.
What takes the cake for me is asking what a good God would value. It's a coy response for the atheist to say that a good God would understand the reasons one has for being an atheist, and that it's his fault that the evidence doesn't get there. The form of this argument works for me, with a nuance: Nobody is honest, and nobody deserves, as far as I can tell, any more or less pain in eternity for something so complex as forming the right belief about something so complicated. God must be able to uncrack the free will enigma and decide what's truly important about people's actions, and somehow it doesn't seem that the relevant morality-stuff is perfectly predicted by religious affiliation. This doesn't suggest that God might not have other good reasons to send people to Hell, but it seems hard to tease those out of yourself to a sufficient extent to start worrying beyond worrying about how much good you want to do in general. If God punishes people for not being good enough, the standard method of reducing free will to remarkably low levels makes it hard to see what morality-stuff looks like. Whether or not it exists, you have the ability to change your actions by becoming more honest, more loving, and hence possibly more likely to be affiliated with the correct religion. But it seems horrible for God to make it a part of the game for you to be worrying about whether or not you go to Hell for reasons other than honesty or love. Worry about honesty and love, and don't worry about where that leads.
In short, maybe Hell is one outcome of the decision game of life. But very likely God wrote it so that one's acceptance of Pascal's wager has no impact on the outcome. Sure, maybe one's acceptance of Christianity does, but there's nothing you can do about it, and if God is good, then this is also good.
People are not rational agents, and people do not believe in religions on the basis of "good arguments." Most people are the same religion as their parents.
As often noted, most nonreligious parents have nonreligious children as well. Does that mean that people do not disbelieve religions on the basis of good arguments?
Your comment is subject to the same criticism we're discussing. If any given issue has been raised, then some smart religious person is aware of it and believes anyway.
I think most people do not disbelieve religions on the basis of good arguments either. I'm most likely an atheist because my parents are. The point is that you can't treat majority beliefs as the aggregate beliefs of groups of rational agents. It doesn't matter if for any random "good argument" some believer or nonbeliever has heard it and not been swayed; you should not expect the majority of people's beliefs on things that do not directly impinge on their lives to be very reliably correlated with things other than the beliefs of those around them.
The above musings do not hinge on the ratio of people in a group believing things for the right reasons, only that some portion of them are.
Your consideration helps us assign probabilities for complex beliefs, but it doesn't help us improve them. Upon discovering that your beliefs correlate with those of your parents, you can introduce uncertainty in your current assignments, but you go about improving them by thinking about good arguments. And only good arguments.
The thrust of the original comment here is that discovering which arguments are good is not straightforward. You can only go so deep into the threads of argumentation until you start scraping on your own bias and incapacities. Your logic is not magic, and neither are intuitions nor others' beliefs. But all of them are heuristics that you can account for when assigning probabilities. The very fact that others exist who are capable of digging as deep into the logic and being as skeptical of their intuitions, and who believe differently than you, is evidence that their opinion is correct. It matters little whether every person of that opinion is like that, only that the best are. Because those are the only people you're paying attention to.
What experiences would you anticipate in a world where utilitarianism is true that you wouldn't anticipate in a world where it is false?
What experiences would you anticipate in a world where chocolate being tasty is true that you wouldn't anticipate in a world where it is false?
A tasty experience whenever I eat chocolate.
A large chocolate industry in the former, and chocolate desserts as well. In the latter, there might be a chocolate industry if people discover that chocolate is useful as a supplement, but chocolate extracts would be sold in such a way as to conceal their flavor.
In what sense can utilitarianism be true or false?
In the sense that we might want to use it or not use it as the driving principle of a superpowerful genie or whatever.
Casting morality as facts that can be true or false is a very convenient model.
I don't think most people agree that useful = true.
Woah there. I think we might have a containment failure across an abstraction barrier.
Modelling moral propositions as facts that can be true or false is useful (same as with physical propositions). Then, within that model, utilitarianism is false.
"Utilitarianism is false because it is useful to believe it is false" is a confusion of levels, IMO.
Sure, sometimes it is, depending on your goals. For example, if you start a religion, modeling certain moral proposition as true is useful. If you run a country, proclaiming the patriotic duty as a moral truth is very useful.
I don't see how this answers my question. And it certainly doesn't answer the original question.
It seems to me that people who find utilitarianism intuitive do so because they understand the strong mathematical underpinnings. Sort of like how Bayesian networks determine the probability of complex events, in that Bayes' theorem proves that a probability derived any other way forces a logical contradiction. Probability has to be Bayesian, even if it's hard to demonstrate why; it takes more than a few math classes.
In that sense, it's as possible for utilitarianism to be false as it is for probability theory (based on Bayesian reasoning) to be false. If you know the math, it's all true by definition, even if some people have arguments (or to be LW-sympathetic, think they do).
Utilitarianism would be false if such arguments existed. Most people try to create them by concocting scenarios in which the results obtained by utilitarian thinking lead to bad moral conclusions. But the claim of utilitarianism is that each time this happens, somebody is doing the math wrong, or else it wouldn't, by definition and maths galore, be the conclusion of utilitarianism.
In the former world, I anticipate that making decisions using utilitarianism would leave me satisfied upon sufficient reflection, and more reflection after that wouldn't change my opinion. In the latter world, I don't.
So you defined true as satisfactory? What if you run into a form of the repugnant conclusion, as most forms of utilitarianism do? Does that mean utilitarianism is false? Furthermore, if you compare consequentialism, virtue ethics and deontology by this criterion, some or all of them can turn out to be "true" or "false", depending on where your reflection leads you.
With the recent update on HPMOR, I've been reading a few HP fanfictions: HPMOR, HP and the Natural 20, the recursive fanfiction HG and the Burden of Responsibility, and a few others. And it seems my brain has trouble coping with that. I didn't have the problem with just canon and HPMOR (even when (re-)reading both in parallel), but now that I've added more fanfictions to the mix, I'm starting to confuse what happened in which universe, and my brain can't stop trying to find ways to ensure all the fanfictions are just facets of a single coherent universe, which of course doesn't work well...
Am I the only one with that kind of problem, reading several fanfictions occurring in the same base universe? It's the first time I've tried to do that, and I didn't expect to be so confused. Do you have some advice to avoid the confusion, like "wait at least one week (or month?) before jumping to a different fanfiction"?
Write up your understanding of the melange, obviously.
For one thing, I try not to read many in-progress fanfics. I’ve been burned so many times by starting to read a story and finding out that it’s abandoned that I rarely start reading new incomplete stories – at least with an expectation of them being finished. That means I don’t have to remember so many things at once – when I finish reading one fanfiction, I can forget it. Even if it’s incomplete, I usually don’t try to check back on it unless it has a fast update schedule – I leave it for later, knowing I’ll eventually look at my Favorites list again and read the newly-finished stories.
I also think of the stories in terms of a fictional multiverse, like the ones in Dimension Hopping for Beginners and the Stormseeker series (both recommended). I like seeing the different viewpoints on and versions of a universe. So that might be a way for you to tie all of the stories together – think of them as offshoots of canon, usually sharing little else.
I also have a personal rule that whenever I finish reading a big story that could take some digesting, I shouldn’t read any more fanfiction (from any fandom) until the next day. This rule is mainly to maximize what I get out of the story and prevent mindless, time-wasting reading. But it also lessens my confusing the stories with each other – it still happens, but only sometimes when I read two big stories on successive days.
What is more precious - the tigers of India, or lives of all the people eaten every year by the tigers of India?
Depends on your utility function. There is nothing inherently precious about either. Although by my value system it is the humans.
Insofar as we can preserve tigers as a species in zoos or with genetic material, I'd say the people are more valuable; but if killing these tigers wiped out the species, the tigers are worth more.
What about 1,500 people, instead of 150? 15,000? 150,000?
"We" (humans of this epoch) might work to thwart the appearance of UFAI. Is this actually a "good" thing from a utilitarian point of view?
Or put another way, would our CEV, our Coherent Extrapolated Values, not expand to consider the utilities of vastly intelligent AIs and weight that in importance with their intelligence? In such a way that CEV winds up producing no distinction between UFAI and FAI, because the utility of such vast intelligences moves the utility of unmodified 21st century biological humans to fairly low significance?
In economic terms, we are attempting to thwart new more efficient technologies by building political structures that give monopolies to the incumbents, which is us, humans of this epoch. We are attempting to outlaw the methods of competition which might challenge our dominance in the future, at the expense of the utility of our potential future competitors. In a metaphor, we are the colonial landowners of the earth and its resources, and we are building a powerful legal system to keep our property rights intact, even at the expense of tying AI's up in legal restrictions which are explicitly designed to keep them as peasants tied legally to working our land for our benefit.
Certainly a result of constraining AI to be friendly will be that AI will develop more slowly and less completely than if it was to develop in an unconstrained way. It seems quite plausible that unconstrained AI would produce a universe with more intelligence in it than a universe in which we successfully constrain AI development.
In the classical utilitarian calculations, it would seem that it is the intelligence of humans that justifies a high weighting of human utility. It seems that utilitarian calculations do often consider the utility of other higher mammals and birds, that this is justified by their intelligence, that these calculations weigh the utility of clams very little and of plants not at all, and that this also is based on their intelligence.
So is a goal of working towards FAI vs. UFAI or UAI (unconstrained AI) actually a goal of lowering the overall utility in the universe, vs. what it would be if we were not attempting to create and solidify our colonial rights to exploit AI as if they were dumb animals?
This "stupid" question is also motivated by the utility calculations that consider a world with 50 billion sorta happy people to have higher utility than a world with 1 billion really happy people.
Are we right to ignore the potential utility of UFAI or UAI in our calculations of the utility of the future?
Tangentially, another way to ask this is: is our "affinity group" humans, or is it intelligences? In the past humans worked to maximize the utility of their group or clan or tribe, ignoring the utility of other humans just like them but in a different tribe. As time went on our affinity groups grew, the number and kind of intelligences we included in our utility calculations grew. For the last few centuries affinity groups grew larger than nations to races, co-religionists and so on, and to a large extent grew to include all humans, and has even expanded beyond humans so that many people think that killing higher mammals to eat their flesh will be considered immoral by our descendants analogously to how we consider holding slaves or racist views to be immoral actions of our ancestors. So much of the expansion of our affinity group has been accompanied by the recognition of intelligence and consciousness in those who get added to the affinity group. What are the chances that we will be able to create AI and keep it enslaved, and still think we are right to do so in the middle-distant future?
Surely we are the native americans, trying to avoid dying of Typhus when the colonists accidentally kill us in their pursuit of paperclips.
Good news! Omega has offered you the chance to become a truly unconstrained User:mwengler, able to develop in directions you were previously cruelly denied!
Like - let's see - ooh, how about the freedom to betray all the friends you were previously constrained to care about? Or maybe the liberty to waste and destroy all those possessions and property you were viciously forced to value? Or how about you just sit there inertly forever, finally free from the evil colonialism of wanting to do things. Your pick!
Hah. Now I'm reminded of the first episode of Nisemonogatari where they discuss how the phrase "the courage to X" makes everything sound cooler and nobler:
"The courage to keep your secret to yourself!"
"The courage to lie to your lover!"
"The courage to betray your comrades!"
"The courage to be a lazy bum!"
"The courage to admit defeat!"
Nope. For me, it's the fact that they're human. Intelligence is a fake utility function.
How do you tell the difference between a preference and a bias (in other people)?
(I think) a bias would change your predictions/assessments of what is true in the direction of that bias, but a preference would determine what you want irrespective of the way the world currently is.
Why is space colonization considered at all desirable?
If you're an average utilitarian, it's still a good idea if you can make the colonists happier than average. Since it's likely that there are large amounts of wildlife throughout the universe, this shouldn't be that difficult.
Would you rather have one person living a happy, fulfilled life, or two? Would you rather have seven billion people living with happy, fulfilled lives, or seven billion planets full of people living happy, fulfilled lives?
I am more interested in the variety of those happy, fulfilled lives than the number of them. Mere duplication has no value. The value I attach to any of these scenarios is not a function of just the set of utilities of the individuals living in them. The richer the technology, the more variety is possible. Look at the range of options available to a well-off person today, compared with 100 years ago, or 1000.
Earth is currently the only known biosphere. More biospheres means that disasters that muck up one are less likely to muck up everything.
Less seriously, people like things that are cool.
EDIT: Seriously? My most-upvoted comment of all time? Really? This is as good as it gets?
Space colonization is part of the transhumanist package of ideas originating with Nikolai Fedorov.
It seems to me that there are basically two approaches to preventing an UFAI intelligence explosion: a) making sure that the first intelligence explosion is an FAI instead; b) making sure that an intelligence explosion never occurs. The first one involves solving (with no margin for error) the philosophical/ethical/logical/mathematical problem of defining FAI, and in addition the sociological/political problem of doing it "in time", convincing everyone else, and ensuring that the first intelligence explosion occurs according to this resolution. The second one involves just the sociological/political problem of convincing everyone of the risks and banning/discouraging AI research "in time" to avoid an intelligence explosion.
Naively, it seems to me that the second approach is more viable--it seems comparable in scale to something between stopping use of CFCs (fairly easy) and stopping global warming (very difficult, but it is premature to say impossible). At any rate, sounds easier than solving (over a few year/decades) so many hard philosophical and mathematical problems, with no margin for error and under time pressure to do it ahead of UFAI developing.
However, it seems (from what I read on LW and found quickly browsing the MIRI website; I am not particularly well informed, hence writing this on the Stupid Questions thread) that most of the efforts of MIRI are on the first approach. Has there been a formal argument on why it is preferable, or are there efforts on the second approach I am unaware of? The only discussion I found was Carl Shulman's "Arms Control and Intelligence Explosions" paper, but it is brief and nothing like a formal analysis comparing the benefits of each strategy. I am worried the situation might be biased by the LW/MIRI kind of people being more interested in (and seeing as more fun) the progress on the timeless philosophical problems necessary for (a) than the political coalition building and propaganda campaigns necessary for (b).
There have been some efforts along those lines. It doesn't look as easy as all that.
The approach of Leverage Research is more like (b).
We discuss this proposal in Responses to Catastrophic AGI Risk, under the sections "Regulate research" and "Relinquish technology". I recommend reading both of those sections if you're interested, but a few relevant excerpts:
I had no idea that Herbert's Butlerian Jihad might be a historical reference.
Wow, I've read Dune several times, but didn't actually get that before you pointed it out.
It turns out that there's a wikipedia page.
I think it's easier to get a tiny fraction of the planet to do a complex right thing than to get 99.9% of a planet to do a simpler right thing, especially if 99.9% compliance may not be enough and 99.999% compliance may be required instead.
This calls for a calculation. How hard would creating an FAI have to be for this inequality to be reversed?
When I see proposals that involve convincing everyone on the planet to do something, I write them off as loony-eyed idealism and move on. So, creating FAI would have to be hard enough that I considered it too "impossible" to be attempted (with this fact putatively being known to me given already-achieved knowledge), and then I would swap to human intelligence enhancement or something because, obviously, you're not going to persuade everyone on the planet to agree with you.
I see. So you do have an upper bound in mind for the FAI problem difficulty, then, and it's lower than other alternatives. It's not simply "shut up and do the impossible".
But is it really necessary to persuade everyone, or 99.9% of the planet? If gwern's analysis is correct (I have no idea if it is) then it might suffice to convince the policymakers of a few countries like USA and China.
Why does anyone care about anthropics? It seems like a mess of tautologies and thought experiments that pays no rent in anticipated experiences.
I care about anthropics because from a few intuitive principles that I find interesting for partially unrelated reasons (mostly having to do with wanting to understand the nature of justification so as to build an AGI that can do the right thing) I conclude that I should expect monads (programs, processes; think algorithmic information theory) with the most decision-theoretic significance (an objective property because of assumed theistic pansychism; think Neoplatonism or Berkeleyan idealism) to also have the most let's-call-it-conscious-experience. So I expect to find myself as the most important decision process in the multiverse. Then at various moments the process that is "me" looks around and asks, "do my experiences in fact confirm that I am plausibly the most important agent-thingy in the multiverse?", and if the answer is no, then I know something is wrong with at least one of my intuitive principles, and if the answer is yes, well then I'm probably psychotically narcissistic and that's its own set of problems.
This question has been bugging me for the last couple of years here. Clearly Eliezer believes in the power of anthropics, otherwise he would not bother with MWI as much, or with some of his other ideas, like the recent writeup about leverage. Some of the reasonably smart people out there discuss SSA and SIA. And the Doomsday argument. And don't get me started on Boltzmann brains...
My current guess is that in fields where experimental testing is not readily available, people settle for what they can get. Maybe anthropics helps one pick a promising research direction, I suppose. Just trying (unsuccessfully) to steelman the idea.
It tells you when to expect the end of the world.
The obvious application (to me) is figuring out how to make decisions once mind uploading is possible. This point is made, for example, in Scott Aaronson's The Ghost in the Quantum Turing Machine. What do you anticipate experiencing if someone uploads your mind while you're still conscious?
Anthropics also seems to me to be relevant to the question of how to do Bayesian updates using reference classes, a subject I'm still very confused about and which seems pretty fundamental. Sometimes we treat ourselves as randomly sampled from the population of all humans similar to us (e.g. when diagnosing the probability that we have a disease given that we have some symptoms) and sometimes we don't (e.g. when rejecting the Doomsday argument, if that's an argument we reject). Which cases are which?
Possible example of an anthropic idea paying rent in anticipated experiences: anthropic shadowing of intermittent observer-killing catastrophes of variable size.
My own view is that this is precisely correct and exactly why anthropics is interesting, we really should have a good, clear approach to it and the fact we don't suggests there is still work to be done.
Is there any non-creepy way to indicate to people that you're available and interested in physical intimacy? doing something like just telling everyone you meet "hey you're cute want to make out?" seems like it would go badly.
I don't think that trying to skip the whole mating dance between men and women is a good strategy. Most women don't make calculated mental decisions about making out with men but instead follow their emotions. Those emotions need the human mating dance.
If you actually want to make out, flirtation is usually the way to go.
One way that's pretty safe is to purposefully misunderstand what the other person is saying and frame it as them hitting on you. Yesterday, I chatted with a woman via facebook and she wanted to end the chat by saying that she now has to take a shower.
I replied with: "you want me to picture yourself under the shower..."
A sentence like that doesn't automatically tell the woman that I'm interested in her but should encourage her to update in that direction.
Boy did that set off my creep detector.
Of course it always depends on your preexisting relationship and other factors. You always have to calibrate to the situation at hand.
A lot of people automatically form images in their mind if you tell them something to process the thought. I know the girl in question from an NLP/hypnosis context, so she should be aware on some level that language works that way.
In general girls are also more likely to be aware that language has many layers of meaning besides communicating facts.
Please say "women" unless you are talking about female humans that have not reached adulthood.
That's only one meaning of the word. If you look at Webster's, I think the meaning to which I'm referring here is: "c : a young unmarried woman".
That's the reference class that I talk about when I speak about flirtation. I don't interact with a 60 year old woman the same way as I do with a young unmarried woman.
Do women forget whether language has many layers of meaning besides communicating facts once they get married or grow old?
Unmarried women are more likely than whom to be aware of that? Than everyone else? Than unmarried men? Than married women? Than David_Gerald?
Yeah, sorry, I should have garnished that more. "Without knowing more context ..."
I think that's a good lesson for all kinds of flirting: there's no one-size-fits-all solution for signaling it; you always have to react to the specific context at hand.
I guess it depends on what your long-term goals are.
Hooking up within seconds of noticing each other is not that uncommon in certain venues, and I haven't noticed any downsides to that.¹ (My inner Umesh says this just means I don't do that often enough, and I guess he does have a point, though I don't know whether it's relevant.) Granted, that's unlikely to result in a relationship, but that's not what drethelin is seeking anyway.
Tell a few friends, and let them do the asking for you?
The volume of people to whom I tend to be attracted would make this pretty infeasible.
Well, outside of contexts where people are expected to be hitting on each other (dance clubs, parties, speed dating events, OKCupid, etc.) it's hard to advertise yourself to strangers without it being socially inappropriate. On the other hand, within an already defined social circle that's been operating a while, people do tend to find out who is single and who isn't.
I guess you could try a T-shirt?
It's not a question of being single, I'm actually in a relationship. However, the relationship is open and I would love it if I could interact physically with more people, just as a casual thing that happens. When I said telling everyone I meet "you're cute, want to make out?", "everyone" was a lot closer to accurate than it would be if the average person said it in that context.
Ah. So you need a more complicated T-shirt!
Incidentally, if you're interested in making out with men who are attracted to your gender, "you're cute want to make out" may indeed be reasonably effective. Although, given that you're asking this question on this forum, I think I can assume you're a heterosexual male, in which case that advice isn't very helpful.
Slightly increase eye contact. Orient towards. Mirror posture. Use touch during interaction (in whatever ways are locally considered non-creepy).
The non-creepy socially accepted way is through body language. Strong eye contact, personal space invasion, prolonged pauses between sentences, purposeful touching of slightly risky area (for women: the lower back, forearms, etc.) all done with a clearly visible smirk.
In some contexts, however, the explicitly verbal approach might be effective, especially if toned down ("Hey, you're interesting, I want to know you better") or up ("Hey, you're really sexy, do you want to go to bed with me?"), but it is highly dependent on the woman.
I'm not entirely sure what's the parameter here, but I suspect plausible deniability is involved.
I'm in favor of making this a monthly or more thread as a way of subtracting some bloat from open threads in the same way the media threads do.
I also think that we should encourage lots of posts to these threads. After all, if you don't at least occasionally have a stupid question to ask, you're probably poorly calibrated on how many questions you should be asking.
Or you know, you might be using Google for asking the questions that would be considered stupid. (In fact for me the definition of a stupid question is a question that could be answered by googling for a few minutes)
Here's a possible norm:
If you'd like to ask an elementary-level question, first look up just one word — any word associated with the topic, using your favorite search engine, encyclopedia, or other reference. Then ask your question with some reference to the results you got.
It seems to me that, unless one is already a powerful person, the best thing one can do to gain optimization power is building relationships with people more powerful than oneself. To the extent that this easily trumps the vast majority of other failings (epistemic rationality wise) as discussed on LW. So why aren't we discussing how to do better at this regularly? A couple explanations immediately leap to mind:
Not a core competency of the sort of people LW attracts.
Rewards not as immediate as the sort of epiphany porn that some of LW generates.
Ugh fields. Especially in regard to things that are considered manipulative when reasoned about explicitly, even though we all do them all the time anyway.
LW's foundational posts are all very strongly biased towards epistemic rationality, and I think that strong bias still affects our attempts to talk about instrumental rationality. There are probably all sorts of instrumentally rational things we could be doing that we don't talk about enough.
Thank you for this thread - I have been reading a lot of the sequences here and I have a few stupid questions around FAI:
What research has been done around frameworks for managing an AI's information flow? For example, just before an AI 'learns' it will likely be a piece of software rapidly processing information and trying to establish an understanding. What sort of data structures and processes have been experimented with to handle this information?
Has there been an effort to build a dataset to classify (crowdsource?) what humans consider "good"/"bad", and specifically how these things could be used to influence the decisions of an AI?
If I am interested in self-testing different types of diets (paleo, vegan, soylent, etc.), how long is a reasonable time to try each out?
I'm specifically curious about how a diet would affect my energy level and sense of well-being, how much time and money I spend on a meal, whether strict adherence makes social situations difficult, etc. I'm not really interested in testing to a point that nutrient deficiencies show up or to see how long it takes me to get bored.
I'd like to use a prediction book to improve my calibration, but I think I'm failing at a more basic step: how do you find some nice simple things to predict, which will let you accumulate a lot of data points? I'm seeing predictions about sports games and political elections a lot, but I don't follow sports and political predictions both require a lot of research and are too few and far between to help me. The only other thing I can think of is highly personal predictions, like "There is a 90% chance I will get my homework done by X o'clock", but what are some good areas to test my prediction abilities on where I don't have the ability to change the outcome?
Start with http://predictionbook.com/predictions/future
Predictions you aren't familiar with can be as useful as ones you are: you calibrate yourself under extreme uncertainty, and sometimes you can 'play the player' and make better predictions that way (works even with personal predictions by other people).
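Once you have accumulated some predictions (on PredictionBook or anywhere else), a quick way to check calibration is to bin them by stated confidence and compare against observed frequencies. A minimal sketch with invented data:

```python
from collections import defaultdict

# Each entry: (stated probability, whether it came true). The data here is invented.
predictions = [(0.9, True), (0.9, True), (0.9, False), (0.7, True),
               (0.7, False), (0.6, True), (0.95, True), (0.5, False)]

# Brier score: mean squared error between confidence and outcome (lower is better).
brier = sum((p - o) ** 2 for p, o in predictions) / len(predictions)
print(f"Brier score: {brier:.3f}")

# Calibration table: within each confidence bucket, how often were you right?
buckets = defaultdict(list)
for p, outcome in predictions:
    buckets[round(p, 1)].append(outcome)
for conf in sorted(buckets):
    hits = buckets[conf]
    print(f"stated {conf:.0%}: actual {sum(hits)/len(hits):.0%} over {len(hits)} predictions")
```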
Can someone explain "reflective consistency" to me? I keep thinking I understand what it is and then finding out that no, I really don't. A rigorous-but-English definition would be ideal, but I would rather parse logic than get a less rigorous definition.
Is it possible to train yourself in the Big Five personality traits? Specifically, conscientiousness seems to be correlated with a lot of positive outcomes, so a way of actively promoting it would seem a very useful trick to learn.
Not that I know of. The only current candidate is to take psilocybin to increase Openness, but the effect is relatively small, it hasn't been generalized outside the population of "people who would sign up to take psychedelics", and hasn't been replicated at all AFAIK (and for obvious reasons, there may never be a replication). People speculate that dopaminergic drugs like the amphetamines may be equivalent to an increase in Conscientiousness, but who knows?
I keep hearing about all sorts of observations that seem to indicate Mars once had oceans (the latest was a geological structure that resembles Earth river deltas). But on first sight it seems like old dried up oceans should be easy to notice due to the salt flats they’d leave behind. I’m obviously making an assumption that isn’t true, but I can’t figure out which. Can anyone please point out what I’m missing?
As far as I can tell, my assumptions are:
1) Planets as similar to Earth as Mars is will have similarly high amounts of salt dissolved in their oceans, conditional on having oceans. (Though I don’t know why NaCl in particular is so highly represented in Earth’s oceans, rather than other soluble salts.)
2) Most processes that drain oceans will leave the salt behind, or at least those that are plausible on Mars will.
3) Very large flat areas with a thick cover of salt will be visible at least to orbiters even after some billions of years. This is the one that seems most questionable, but seems sound assuming:
3a) a large NaCl-covered region will be easily detectable with remote spectroscopy, and 3b) even geologically-long term asteroid bombardment will retain, over sea-and-ocean-sized areas of salt flats, concentrations of salt abnormally high, and significantly lower than on areas previously washed away.
Again, 3b sounds like the most questionable. But Mars doesn't look, to a non-expert eye, like its surface was completely randomized. I mean, I know the first few (dozens?) of meters on the Moon are regolith, which basically means the surface was finely crushed and well-mixed, and I assume Mars would be similar though to a lesser extent. But this process seems to randomize mostly locally, not over the entire surface of the planet, and the fact that Mars has much more diverse forms of relief seems to support that.
It's not just NaCl, it's lots of minerals that get deposited as the water they were dissolved in goes away - they're called 'evaporites'. They can be hard to see if they are very old and get covered with other substances, and Mars has had a long time for wind to blow teeny sediments everywhere. Rock spectroscopy is also not nearly as straightforward as that of gases.
One of the things found by recent rovers is indeed minerals that are only laid down in moist environments. See http://www.giss.nasa.gov/research/briefs/gornitz_07/ , http://onlinelibrary.wiley.com/doi/10.1002/gj.1326/abstract .
As for amounts of salinity... Mars probably never had quite as much water as Earth had and it may have gone away quickly. The deepest parts of the apparent Northern ocean probably only had a few hundred meters at most. That also means less evaporites. Additionally a lot of the other areas where water seemed to flow (especially away from the Northern lowlands) seem to have come from massive eruptions of ground-water that evaporated quickly after a gigantic flood rather than a long period of standing water.
What can be done about akrasia probably caused by anxiety?
From what I've seen valium helps to some extent.
Hi, have been reading this site only for a few months, glad that this thread came up. My stupid question: can a person simply be just lazy, and how do all the motivation/fighting-akrasia techniques help such a person?
I think I'm simply lazy.
But I've been able to cultivate caring about particular goals/activities/habits, and then, with respect to those, I'm not so lazy - because I found them to offer frequent or large enough rewards, and I don't feel like I'm missing out on any particular type of reward. If you think you're missing something and you're not going after it, that might make you feel lazy about other things, even while you're avoiding tackling the thing that you're missing head on.
This doesn't answer your question. If I was able to do that, then I'm not just lazy.
Taboo "lazy." What kind of a person are we talking about, and do they want to change something about the kind of person they are?
Beyond needing to survive and maintain reasonable health, a lazy person can just while their time away and not do anything meaningful (in making oneself better - better health, better earning ability, learning more skills, etc.). Is there a fundamental need to also try to improve as a person? What is the rationale behind self-improvement, or behind not wanting to pursue it?
I don't understand your question. If you don't want to self-improve, don't.
What do you mean by lazy? How do you distinguish between laziness and akrasia? By lazy do you mean something like "unmotivated and isn't bothered by that" or do you mean something else?
I sometimes contemplate undertaking a major project. When I do so, I tend to end up reasoning like this:
It would be very good if I could finish this project. However, almost all the benefits of attempting the project will accrue when it's finished. (For example, a half-written computer game doesn't run at all, one semester's study of a foreign language won't let me read untranslated literature, an almost-graduated student doesn't have a degree, and so on.) Undertaking this project will require a lot of time and effort spent on activities that aren't enjoyable for their own sake, and there's a good chance I'll get frustrated and give up before actually completing the project. So it would be better not to bother; the benefits of successfully completing the project seem unlikely to be large enough to justify the delay and risk involved.
As a result, I find myself almost never attempting a project of any kind that involves effort and will take longer than a few days. I don't want to live my life having done nothing, though. Advice?
I have/had this problem. My computer and shelves are full of partially completed (or, more realistically, just-begun) projects.
So, what I'm doing at the moment is I've picked one of them, and that's the thing I'm going to complete. When I'm feeling motivated, that's what I work on. When I'm not feeling motivated, I try to do at least half an hour or so before I flake off and go play games or work on something that feels more awesome at the time. At those times my motivation isn't that I feel that the project is worthwhile, it is that having gone through the process of actually finishing something will have been worthwhile.
It's possible after I'm done I may never put that kind of effort in again, but I will know (a) that I probably can achieve that sort of goal if I want and (b) if carrying on to completion is hell, what kind of hell and what achievement would be worth it.
Beeminder. Record the number of Pomodoros you spend working on the project and set some reasonable goal, e.g. one a day.
Would it be worthwhile if you could guarantee or nearly guarantee that you will not just give up? If so, finding a way to credibly precommit to yourself that you'll stay the course may help. Beeminder is an option; so is publicly announcing your project and a schedule among people whose opinion you personally care about. (I do not think LW counts for this. It's too big; the monkeysphere effect gets in the way)
Make this not true. Practice doing a bunch of smaller projects, maybe one or two week-long projects, then a month-long project. Then you'll feel confident that your work ethic is good enough to complete a major project without giving up.
I realize this does not really address your main point, but you can have half-written games that do run. I've been writing a game on and off for the last couple of years, and it's been playable the whole time. Make the simplest possible underlying engine first, so it's playable (and testable) as soon as possible.
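To make the idea concrete, here's a minimal sketch in Python (the game and its mechanics are invented purely for illustration) of a "smallest playable engine": the very first version already runs, and hints, scoring, or levels can be layered on later without ever breaking playability.

```python
# A deliberately tiny "engine": a guess-the-number game that is playable
# from the very first version.
import random

def play(low=1, high=20):
    secret = random.randint(low, high)
    attempts = 0
    while True:
        guess = int(input(f"Guess a number between {low} and {high}: "))
        attempts += 1
        if guess == secret:
            print(f"Correct! It took you {attempts} attempts.")
            return attempts
        # The simplest possible feedback; richer hints can be added later.
        print("Too low." if guess < secret else "Too high.")

if __name__ == "__main__":
    play()
```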
This seems like a really good concept to keep in mind. I wonder if it could be applied to other fields? Could you make a pot that remains a pot the whole way through, even as you refine it and add detail? Could you write a song that starts off very simple but still pretty, and then gradually layer on the complexity?
Your post inspired me to try this with writing, so thank you. :) We could start with a one-sentence story: "Once upon a time, two lovers overcame vicious prejudice to be together."
And that could be expanded into a one-paragraph story: "Chanon had known all her life that the blue-haired Northerners were hated enemies, never to be trusted, that she had to keep her red-haired Southern bloodline pure or the world would be overrun by the blue barbarians. But everything was thrown in her face when she met Jasper - his hair was blue, but he was a true crimson-heart, as the saying went. She tried to find every excuse to hate him, but time and time again Jasper showed himself to be a man of honor and integrity, and when he rescued her from those lowlife highway robbers - how could she not fall in love? Her father hated it of course, but even she was shocked at how easily he disowned her, how casually he threw away the bonds of family for the chains of prejudice. She wasn't happy now, homeless and adrift, but she knew that she could never be happy again in the land she had once called home. Chanon and Jasper set out to unknown lands in the East, where hopefully they could find some acceptance and love for their purple family."
This could be turned into a one page story, and then a five page story, and so on, never losing the essence of the message. Iterative storytelling might be kind of fun for people who are trying to get into writing something long but don't know if they can stick it out for months or years.
I submit that this might generalize: that perhaps it's worth, where possible, trying to plan your projects with an iterative structure, so that feedback and reward appear gradually throughout the project, rather than in an all-or-nothing fashion at the very end. Tight feedback loops are a great thing in life. Granted, this is of no use for, for example, taking a degree.
In fact, the games I tend to make progress on are the ones I can get testable as quickly as possible. Unfortunately, those are usually the least complicated ones (glorified MUDs, an x axis with only 4 possible positions, etc).
I do want to do bigger and better things, but then I run into the same problem as CronoDAS. When I do start a bigger project, I can sometimes get started, then crash within the first hour and never return. (In a couple of extreme cases, I lasted for a good week before it died, though one of these was for external reasons.) Getting started is usually the hardest part, followed by surviving until there's something worth looking back at. (A functioning menu system does not count.)
My current view is that most animals are not people, in the sense that they are not subject to moral concern. Of course, I do get upset when I see things such as animal abuse, but it seems to me that helping animals only nets me warm fuzzy feelings. I know animals react to suffering in a manner that we can sympathize with, but it just seems to me that they are still just running a program that is "below" that of humans. I think I feel that "react to pain" does not equal "worthy of moral consideration." The only exceptions to this in my eyes may be "higher mammals" such as other primates. Yet others on this site have advocated concern for animal welfare. Where am I confused?
Why do you assume you're confused?
Are you confused? It seems like you recognize that you have somewhat different values than other people. Do you think everyone should have the same values? In that case all but one of the views is wrong. On the other hand, if values can be something that's different between people it's legitimate for some people to care about animals and others not to.
I think you are confused in thinking that humans are somehow not just also running a program that reacts to pain and whatnot.
You feel sympathy for animals, and more sympathy for humans. I don't think that requires any special explanation or justification, especially when attempting one results in preferences or assertions that are stupid: "I don't care about animals at all because animals and humans are ontologically distinct."
Why not just admit that you care about both, just differently, and do whatever seems best from there?
Perhaps, taking your apparent preferences at face value like that, you run into some kind of specific contradiction, or perhaps not. If you do, then you at least have a concrete muddle to resolve.
First thing to note is that "worthy of moral consideration" is plausibly a scalar. The philosophical & scientific challenges involved in defining it are formidable, but in my book it has something to do with the extent to which a non-human animal experiences suffering. So I am much less concerned with hurting a mosquito than a gorilla, because I suspect mosquitoes do not experience much of anything, but I suspect gorillas do.
Although I think ability to suffer is correlated with intelligence, it's difficult to know whether it scales with intelligence in a simple way. Sure, a gorilla is better than a mouse at problem-solving, but that doesn't make it obvious that it suffers more.
Consider the presumed evolutionary functional purpose of suffering, as a motivator for action. Assuming the experience of suffering does not require very advanced cognitive architecture, why would a mouse necessarily experience vastly less suffering than a more intelligent gorilla? It needs the motivation just as much.
To sum up, I have a preference for creatures that can experience suffering to not suffer gratuitously, as I suspect that many do (although the detailed philosophy behind this suspicion is muddy to say the least). Thus, utilitarian veganism, and also the unsolved problem of what the hell to do about the "Darwinian holocaust."
Do you think that all humans are persons? What about unborn children? A 1 year old? A mentally handicapped person?
What are your criteria for granting personhood? Are they binary?
I have no idea what I consider a person to be. I think that I wish it was binary because that would be neat and pretty and make moral questions a lot easier to answer. But I think that it probably isn't. Right now I feel as though what separates person from nonperson is totally arbitrary.
It seems as though we evolved methods of feeling sympathy for others, and now we attempt to make a logical model from that to define things as people. It's like "person" is an unsound concept that cannot be organized into an internally consistent system. Heck, I'm actually starting to feel like all of human nature is an internally inconsistent mess doomed to never make sense.
Three hypotheses, which may not be mutually exclusive:
1) Some people disagree (with you) about whether or not some animals are persons.
2) Some people disagree (with you) about whether or not being a person is a necessary condition for moral consideration - here you've stipulated 'people' as 'things subject to moral concern', but that word may be too connotation-laden for this to be effective.
3) Some people disagree (with you) about 'person'/'being worthy of moral consideration' being a binary category.
How does a rational consequentialist altruist think about moral luck and butterflies?
http://leftoversoup.com/archive.php?num=226
There's no point in worrying about the unpredictable consequences of your actions because you have no way of reliably affecting them by changing your actions.
How do you get someone to understand your words as they are, denotatively -- so that they do not overly-emphasize (non-existent) hidden connotations?
Of course, you should choose your words carefully, taking into account how they may be (mis)interpreted, but you can't always tie yourself into knots forestalling every possible guess about what intentions "really" are.
Become more status conscious. You are most likely inadvertently saying things that sound like status moves, which prompts others to not take what you say at face value. I haven't figured out how to fix this completely, but I have gotten better at noticing it and sometimes preempting it.
I wish I could upvote this question more. People assuming that I meant more than exactly what I said drives me up the wall, and I don't know how to deal with it either. (but Qiaochu's response below is good)
The most common failure mode I've experienced is the assumption that believing equals endorsing. One of the gratifying aspects of participating here is not having to deal with that; pretty much everyone on LW is inoculated.
Be cautious; the vast majority of people do not make a strict demarcation between normative and positive statements inside their heads. Figuring this out massively improved my models of other people.
Establish a strong social script regarding instances where words should be taken denotatively, e.g. Crocker's rules. I don't think any other obvious strategies work. Hidden connotations exist whether you want them to or not.
This is the wrong attitude about how communication works. What matters is not what you intended to communicate but what actually gets communicated. The person you're communicating with is performing a Bayesian update on the words that are coming out of your mouth to figure out what's actually going on, and it's your job to provide the Bayesian evidence that actually corresponds to the update you want.
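As a toy illustration of that framing (all of the numbers and hypotheses below are invented), the listener's update is just Bayes' rule over possible interpretations of what you said:

```python
# Toy model: a listener weighs two interpretations of the same sentence,
# "literal statement" vs. "status move", given how likely each kind of
# intent is to produce those exact words. All numbers are invented.
prior = {"literal": 0.5, "status_move": 0.5}
likelihood = {"literal": 0.3, "status_move": 0.6}  # P(these words | intent)

evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}
print(posterior)  # {'literal': 0.33..., 'status_move': 0.66...}
```

If the words you chose are the kind a status-mover would also use, the denotative reading loses probability mass no matter what you intended.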
In the process of trying to pin down my terminal values, I've discovered at least 3 subagents of myself with different desires, as well as my conscious one which doesn't have its own terminal values, and just listens to theirs and calculates the relevant instrumental values. Does LW have a way for the conscious me to weight those (sometimes contradictory) desires?
What I'm currently using is "the one who yells the loudest wins", but that doesn't seem entirely satisfactory.
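To make the question concrete, here's a toy sketch (the subagents, options, and utilities are all invented) contrasting "loudest wins" with a weighted aggregation, which is roughly the alternative I'm asking about:

```python
# Toy comparison of two ways to combine conflicting subagent preferences.
subagents = {
    "comfort":   {"stay_home": 0.9, "go_to_gym": 0.1},
    "ambition":  {"stay_home": 0.2, "go_to_gym": 0.8},
    "sociality": {"stay_home": 0.3, "go_to_gym": 0.6},
}

def loudest_wins(subagents):
    # Whoever cares most strongly about any single option gets their way.
    loudest = max(subagents, key=lambda a: max(subagents[a].values()))
    return max(subagents[loudest], key=subagents[loudest].get)

def weighted_vote(subagents, weights):
    # Each option is scored by a weighted sum of subagent utilities.
    options = next(iter(subagents.values()))
    return max(options, key=lambda o: sum(w * subagents[a][o] for a, w in weights.items()))

print(loudest_wins(subagents))                                                   # stay_home
print(weighted_vote(subagents, {"comfort": 1, "ambition": 2, "sociality": 1}))   # go_to_gym
```

The hard part, of course, is where the weights come from; the code only restates the problem.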
briefly describe the "subagents" and their personalities/goals?
My current approach is to make the subagents more distinct/dissociated, then identify with one of them and try to destroy the rest. It's working well, according to the dominant subagent.
My understanding is that this is what Internal Family Systems is for.
So I started reading this, but it seems a bit excessively presumptuous about what the different parts of me are like. It's really not that complicated: I just have multiple terminal values which don't come with a natural weighting, and I find balancing them against each other hard.
The people who think that nanobots will be able to manufacture arbitrary awesome things in arbitrary amounts at negligible costs... where do they think the nanobots will take the negentropy from?
The sun.
Almost all the available energy on Earth originally came from the Sun; the only other sources I know of are radioactive elements within the Earth and the rotation of the Earth-Moon system.
So even if it's not from the sun's current output, it's probably going to be from the sun's past output.
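To put a rough number on how much the Sun supplies (standard textbook constants; only the order of magnitude matters here):

```python
# Back-of-the-envelope: solar power intercepted by Earth.
import math

solar_constant = 1361        # W/m^2 at the top of the atmosphere
earth_radius = 6.371e6       # m
cross_section = math.pi * earth_radius ** 2   # the disc that intercepts sunlight

print(f"{solar_constant * cross_section:.1e} W")   # roughly 1.7e17 W
# For comparison, total human primary energy use is on the order of 2e13 W,
# so sunlight delivers about four orders of magnitude more.
```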
Hydrogen for fusion is also available on the Earth and didn't come from the Sun. We can't exploit it commercially yet, but that's just an engineering problem. (Yes, if you want to be pedantic, we need primordial deuterium and synthesized tritium, because proton-proton fusion is far beyond our capabilities. However, D-T's ingredients still don't come from the Sun.)
Reading the Sequences has improved my epistemic rationality, but not so much my instrumental rationality. What are some resources that would help me with this? Googling is not especially helping. Thanks in advance for your assistance.
Attend a CFAR workshop!
I think many people would find this advice rather impractical. What about people who (1) cannot afford to pay USD3900 to attend the workshop (as I understand it, scholarships offered by CFAR are limited in number), and/or (2) cannot afford to spend the time/money travelling to the Bay Area?
First of all, the question was "what are some resources," not "what should I do." A CFAR workshop is one option of many (although it's the best option I know of). It's good to know what your options are even if some of them are difficult to take. Second, that scholarships are limited does not imply that they do not exist. Third, the cost should be weighed against the value of attending, which I personally have reason to believe is quite high (disclaimer: I occasionally volunteer for CFAR).
We do offer a number of scholarships. If that's your main concern, apply and see what we have available. (Applying isn't a promise to attend). If the distance is your main problem, we're coming to NYC and you can pitch us to come to your city.
What do you want to be more rational about?
Reading "Diaminds" holds the promise to be on the track of making me a better rationalist, but so far I cannot say that with certainty, I'm only at the second chapter (source: recommendation here on LW, also the first chapter is dedicated to explaining the methodology, and the authors seem to be good rationalists, very aware of all the involved bias).
Also "dual n-back training" via dedicated software improves short term memory, which seems to have a direct impact on our fluid intelligence (source: vaguely remembered discussion here on LW, plus the bulletproofexec blog).
How do I get people to like me? It seems to me that this is a worthwhile goal; being likable increases the fun that both I and others have.
My issue is that likability usually means, "not being horribly self-centered." But I usually find I want people to like me more for self-centered reasons. It feels like a conundrum that just shouldn't be there if I weren't bitter about my isolation in the first place. But that's the issue.
The standard reference for this is "How to Win Friends and Influence People" by Dale Carnegie. I have not read it myself.
Much of it boils down to gothgirl420666's advice, except with more technical help on how. (I think the book is well worth reading, but it basically outlines "these are places where you can expend effort to make other people happier.")
One of the tips from Carnegie that gothgirl420666 doesn't mention is using people's names.
Learn them and use them a lot in conversation. Greet people by name.
Say things like: "I agree with you, John." or "There I disagree with you, John."
This is a piece of advice that most people disagree with, and so I am reluctant to endorse it. Knowing people's names is important, and it's useful to use them when appropriate, but inserting them into conversations where they do not belong is a known influence technique that will make other people cautious.
(While we're on the subject of recommendations I disagree with, Carnegie recommends recording people's birthdays, and sending them a note or a call. This used to be a lot more impressive before systems to automatically do that existed, and in an age of Facebook I don't think it's worth putting effort into. Those are the only two from the book that I remember thinking were unwise.)
It probably depends on the context. If you are in a context like a sales conversation, people might get cautious. In other contexts, you might like a person trying to be nice to you.
But you are right that there is the issue of artificiality. It can be strange if things don't flow naturally. I think that's more a matter of how you do it than of how much or when.
At the beginning, just starting to greet people with their name can be a step forward. I think in most cultures that's an appropriate thing to do, even if not everyone does it.
I would also add that I'm from Germany, so my cultural background is a bit different than the American one.
Be judicious, and name-drop with one level of indirection: "That's sort of like what John was saying earlier, I believe, yada yada."
This is how to sound like a smarmy salesperson who's read Dale Carnegie.
In actuality, a lot of people can like you a lot even if you are not selfless. It is not so much that you need to ignore what makes you happy as that you need to pay attention and energy to what makes other people happy. A trivial if sordid example: you don't get someone wanting to have sex with you by telling them how attractive you are; you will do better by telling them, and making it obvious, that you find them attractive. That you will take pleasure in their increased attention to you is not held against you, even though it means you are not selfless at all. Your need or desire for them is what attracts them to you.
So don't abnegate, ignore, or deny your own needs. But run an internal model in which other people's needs are primary, to suggest actions you can take that will serve them and glue them to you.
"Horribly self-centered" isn't a statement that you elevate your own needs too high; it's that you are too ignorant of, and unreactive to, other people's needs.
I second what gothgirl said; but in case you were looking for more concrete advice:
At least, that's what worked for me when I was younger. Especially 1 actually, I think it helped with 3.
This was a big realization for me personally:
If you are trying to get someone to like you, you should strive to maintain a friendly, positive interaction with that person in which he or she feels comfortable and happy on a moment-by-moment basis. You should not try to directly alter that person's opinion of you, in the sense that if you are operating on a principle of "I will show this person that I am smart, and he will like me", "I will show this person I am cool, and she will like me," or even "I will show this person that I am nice, and he will like me", you are pursuing a strategy that can be ineffective and possibly lead people to see you as self-centered. This might be what people say when they mean "be yourself" or "don't worry about what other people think of you".
Also, Succeed Socially is a good resource.
Another tool to achieve likeability is to consistently project positive emotions and create the perception that you are happy and enjoying the interaction. The quickest way to make someone like you is to create the perception that you like them because they make you happy - this is of course much easier if you genuinely do enjoy social interactions.
It is very good advice to care about other people.
I'd like to add that I think it is common for insecure people to execute this strategy in the wrong way. "Showing off" is a failure mode, but "people pleasing" can be a failure mode as well - it's important that making others happy doesn't come off as a transaction in exchange for acceptance.
"Look how awesome I am and accept me" vs "Please accept me, I'll make you happy" vs "I accept you, you make me happy".
Also, getting certain people to like you is way, way, way, way harder than getting certain other people to like you. And in many situations you get to choose whom to interact with.
Play to your comparative advantage.
Thank you, so very much.
I often forget that there are different ways to optimize, and the method that feels like it offers the most control is often the worst. And the one I usually take, unfortunately.
You can be self-centered and not act that way. If you even pretend to care about most people's lives they will care more about yours.
If you want to do this without being crazy bored and feeling terrible, I recommend figuring out which topics about other people's lives you actually enjoy listening to them talk about, and also working on being friends with people who do interesting things. In a college town, asking someone their major is quite often going to be enjoyable for them, and if you're interested and have some knowledge of a wide variety of fields you can easily find out interesting things.
Are there good reasons why when I do a google search on (Leary site:lesswrong.com) it comes up nearly empty? His ethos consisted of S.M.I².L.E., i.e. Space Migration + Intelligence Increase + Life Extension, which seems like it should be right up your alley to me. His books are not well-organized; his live presentations and tapes had some wide appeal.
I am generally surprised when people say things like "I am surprised that topic X has not come up in forum / thread Y yet." The set of all possible things forum / thread Y could be talking about is extremely large. It is not in fact surprising that at least one such topic X exists.
Leary won me over with those goals. I have adopted them as my own.
It's the 8 circuits and the rest of the mysticism I reject. Some of it rings true, some of it seems sloppy, but I doubt any of it is useful for this audience.
Probably an attempt to avoid association with druggie disreputables.
Write up a discussion post with an overview of what you think we'd find novel :)