This reminds me of the place premium, an interesting concept: someone doing the same job in one country can earn more than in another. Though we are talking about a kid who can't even get a job in the first place, the concept still applies.
For example, suppose a homogeneous region (a country, city, or even suburb) has automated to such a degree that menial jobs are few, and has attracted the best people, and the best people to serve the best people. Such a region has a 'place premium', as the top creative jobs (programming, finance, design work, etc.) pay extremely well...
Plenty of foods available today were not available to our ancestors, such as semi-dwarf wheat.
It could be that 'user 75th' only had the right information and mental algorithms to produce the correct prediction in this one case. In other cases, 'user 75th' might not have passed a sufficient threshold of probability to venture a prediction.
Please label me 'user 2nd' when it comes to predictions of 'user 75th''s predictive powers.
Being happy is a higher-order goal than becoming attractive, correct? How about picking up meditation instead? You shouldn't need to rely on anyone but yourself to be a happy person.
Here are some simple instructions to get you started. If you're interested, google "Progressive Stages of Meditation in Plain English" for more detailed instructions.
To the degree that money is used as a store of value, the money supply available for 'positive-sum' trades decreases. Say the supply of goods and services on the market stays the same; then, with less money available to potentially purchase these goods and services, their prices decrease (basic microeconomic supply and demand). This incentivizes people who are not holding money as a store of value to participate in more positive-sum trades.
Of course, people might end up taking their store-of-value money and investing it, allowing the creation of capital goods that make more efficient production possible. But that's another story.
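To sketch that mechanism with the standard quantity-theory identity (my framing, not something the comment above invokes by name):

$$MV = PQ \quad\Rightarrow\quad P = \frac{MV}{Q}$$

Hold velocity $V$ and real output $Q$ fixed; if hoarding pulls $\Delta M$ out of circulation, the price level falls to $(M - \Delta M)V/Q$, so each remaining unit of money buys more. That lower price level is the incentive for extra positive-sum trades described above.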
Better to think of ways to not spend money than think of ways to keep on living relying on other people's money.
> Better to think of ways to not spend money than think of ways to keep on living relying on other people's money.
You don't get rich that way, though. Sure, you can accumulate a comfortable amount of low-grade wealth, but all the real games are played with other people's money. The only difference between B_For_Bandana's trick and the typical externalities exploited by your average high roller is the number of zeros involved in the figures.
Intriguing. Is this an actual paraphrase of a US Surgeon General? I can imagine it is something someone in high office might say.
Accessing long-term memory appears to be a reconstructive process, which additionally results in accessed memories becoming fragile again; I believe this is what is occurring here. The learned aversion is reconstructed and is then much more susceptible to damage than other, non-recently-accessed long-term memories. Consider that the drug didn't destroy ALL of the mice's (fear?) memories, only the one most recently accessed.
So no worries for cryonics!
I can only give you one upvote, so please take my comment as a second.
Let's be honest about 'demonstrating rationality' here. If your goal is to have much more romping in the bedroom, they have done well. However, many of these techniques speak to me of cults, the ones where the leader gets all the brainwashed girls.
A much better sign of rationality is success in career, in money, in fame: being Successful, not just having more fun. That hasn't been much demonstrated, though I am still hopeful.
> If your goal is to have much more romping in the bedroom, they have done well. However, many of these techniques speak to me of cults, the ones where the leader gets all the brainwashed girls.
The irony is that I recall a few years ago reading someone criticizing LWers to the effect that 'I would be more impressed by their so-called rationality if they were losing their virginity or getting laid more, than the stuff they focus on'. So, the NYCers are apparently doing just that and the response is this?
(Truly, damned if you do and damned if you don't.)
To be honest, as a long-term supporter of SIAI, I see this sort of social experimentation as a serious political blunder. I personally have no problem with finding new (or outside current Western culture) techniques of... social interaction... if you believe they will make yourself and others 'better', for some definition of better.
But if you are serious about actually getting the world behind the movement, this is Bad. "Why should I believe you when you seem to be amoral?" I have more arguments on this matter, but they are easy to generate any...
"Why should I believe you when you seem to be amoral?"
If we took this argument seriously, we (at least those of us in the United States) would have to pretend to be Christians, too.
Optimizing your life to minimize the worst that Mrs Grundy can say about you is a losing proposition — even if Mrs Grundy is a trendy New York gossip columnist rather than a curtain-twitching busybody neighbor.
With Unbreakable Vows, the... arbitrator?... sacrifices a portion of their magic permanently, yes? One issue: after you die you might need that magic for something; perhaps the more magic you have, the more pleasant (or less!) a magically created heaven is. In any case, even if magical society were fine with sacrifices, they might reason thus and not use Unbreakable Vows. Such a society would make (magical!) investigation into the potential afterlife a top priority, so the disuse of such a ritual might be compensated for by finding out there is a heaven (or hell).
Clearly, if you foresee larger costs as you age, the wrong course of action is to simply do nothing and find that, when you are old, you have no money to pay for the policy. If you don't want to spend a large amount when you are old, save now. Perhaps, if you save and invest enough, you will have enough money to simply buy a cryonics policy directly.
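For illustration only (the $100/month, 5% return, and 40-year horizon are my assumptions, not figures from this thread), a minimal sketch of "save now and buy the policy directly":

```python
# Rough compounding sketch; all numbers are illustrative assumptions.

def future_value(monthly: float, annual_rate: float, years: int) -> float:
    """Future value of a fixed monthly contribution, compounded monthly."""
    r = annual_rate / 12   # monthly rate
    n = years * 12         # number of contributions
    return monthly * ((1 + r) ** n - 1) / r

print(f"${future_value(100, 0.05, 40):,.0f}")  # ~$152,600
```

On those assumptions you end up around $152,600, which is on the order of the policy figures quoted elsewhere in this thread.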
Other than the mass suicides...
And including the mass suicides? Remember that in this story, 6 billion people become 1 in a million, and over 25% of people died in this branch of the story. Destroying Huygens resulted in 15 billion deaths.
As they say, shut up and multiply.
At first, I thought that making a new convention was the wrong way to go about it. How many conventions would we need to remember then? Making new conventions all over the place for LWers would be too difficult: too many different social rules to juggle.
For example, in a situation such as asking a person out, you would need to think about LW community conventions and then normal conventions when deciding how to act. But then, you can't do better unless you allow for change.
If a community is to be truly made, perhaps a set of conventions can be con...
The problem isn't in remembering social conventions; humans do that naturally, and you're using oodles of them right now.
If there is a problem, it is in consciously calling for the new social convention, as it's the less common way they form. I don't think there's anything wrong here, though.
Are you going to kill yourself now, given that you are only living because you know that someday you will be alive and Cowell will not be? Because not signing up for cryonics is saying that you don't want to live longer than ~90 years :)
Most interesting! I would also recommend CompSci 111 even if you are skilled with computers. It introduces you to a wide range of skills.
You might even bump into me in the corridor.
I noticed. I'll be setting up a new meetup soon, as someone else has requested one. Auckland is positively on fire with rationality, it seems! Bring water buckets.
You are doing computer science now? That's most interesting. Are you taking any stage 1 CompSci papers this semester?
Yes, the refined carbohydrates are the real killer here. Eat as much meat as you want, but no more white bread!
The complete notes are a fantastic summary.
I put in ~1000 over a few months. For a better world!
> I don't smoke, but I made the mistake of starting a can of Pringles yesterday. If you asked me my favorite food, there are dozens of things I would say before "Pringles". Right now, and for the vast majority of my life, I feel no desire to go and get Pringles. But once I've had that first chip, my motivation for a second chip goes through the roof, without my subjective assessment of how tasty Pringles are changing one bit.
What is missing from this is the effort (which eats up the limited willpower budget) required to get the s...
I am also a New Zealander, AND I am signed up with the Cryonics Institute. You might be interested in contacting the Cryonics Association of Australasia, but I'm sure there is no actual suspension and storage nearby.
Besides, you are missing the main point: if you don't sign up now and you die tomorrow, you are annihilated, no questions asked. I would be wary of this question, as it can be an excuse not to sign up.
A turnout of 3, including myself, which is quite a success for a small place such as Auckland. We agreed to meet again in mid-December. So, for anyone who considered coming but did not: please come next time! These meetups are excellent motivators for studying rationality.
Automatically; if I had done it by hand, it would have looked nicer. I'm working on this project again, so I hope to have something much more user-friendly coded soon. I'll make what you mentioned as well.
Agreed, pain overwhelming your thoughts entirely is too extreme, though it's understandable how it evolved that way.
In Getting Things Done, the first step is simply writing down each task you want to accomplish (at any level of difficulty and time commitment); then you do a separate processing step after that.
That is when you decide how long each task will take; if it takes less than 5 minutes, you do it now. When you get into the GTD system of life organization, trivial impetuses get put down in the initial collection phase, and when you get around to processing them, you have habits that say "do the task now if it takes less than 5 minutes". GTD is (apparently; I've tried to get it working for me, but with little success so far) a life-changing thing.
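If it helps, a toy sketch of that two-step flow (capture, then process); the Task fields and the coded five-minute rule are just my rendering of the description above, not anything official from the book:

```python
# Toy sketch of GTD's capture-then-process flow, as described above.
# All names and the 5-minute threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Task:
    description: str
    estimated_minutes: int  # decided during processing, not during capture

def process(inbox: list[Task]) -> list[Task]:
    """Processing step: estimate each task; do it immediately if quick."""
    deferred = []
    for task in inbox:
        if task.estimated_minutes <= 5:
            print(f"Doing now: {task.description}")
        else:
            deferred.append(task)  # schedule or file for later
    return deferred

# Capture step: write everything down, at any level of difficulty or size.
inbox = [Task("Reply to meetup email", 3), Task("Draft cryonics budget", 45)]
later = process(inbox)
print("Deferred:", [t.description for t in later])
```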
You could make voting on a post mandatory in order to comment on it, so that to submit a comment you get prompted to vote it up or down (or maybe neutral).
Or maybe just having the up/down/neutral vote buttons next to the comment submit button, right in people's faces, would make people more likely to vote.
Neat! I did not think of generalizing my arguments; we could call it the resource commit fallacy. We need techniques to help us solve this problem.
One strategy that comes to mind is precommitting to allocate the resources to the most efficient place before you optimize yourself, then making a bet with someone such that if you fail to follow through on your commitment, you pay a penalty of resources.
I reiterated the cost analysis from my perspective because it is essential to my argument about why people see cryonics as super beneficial but still fail to do anything about it, all the while sitting on a potential gold mine of money, frittered away, which could be used to get cryonics!
I don't even drink coffee, so I'm going to have to think hard about what part of my life I should optimize. I picked it because many people drink Starbucks coffee (we even have them in my little island country), and I presume you can do it more cheaply if you buy your own.
Well, there are a great many factors I am glossing over, but if you are pessimistic about cryonics to that degree, you are probably pessimistic about other future technologies, like medical and anti-aging technologies. You will die eventually unless actuarial escape velocity occurs while you are alive. Assuming it does not, then without cryonics you won't take advantage of the indefinite lifespans future humans will possess; old age will kill you.
You could very well be worth more than $200 million; you just need to live long enough!
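For flavor (my numbers, not anything claimed above), the sort of arithmetic behind that:

$$\$1{,}000 \times 1.05^{250} \approx \$1{,}000 \times \left(2.0 \times 10^{5}\right) \approx \$2 \times 10^{8}$$

That is, a single $1,000 compounding at an assumed 5% annual return for 250 years comes out on the order of $200 million; both the rate and the horizon are assumptions.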
Now, the real costs are between $25,000 and $155,000, in addition to annual membership fees, signup fees, transportation fees after death, etc. That's how much you have to save during your lifetime to get cryopreserved.
The real cost per day of not bothering to optimize how you purchase food also adds up over time. Most people, I would be willing to bet, could save quite a substantial amount of money with some careful thought and planning simply in how they purchase food. $7 a week over an average lifespan is about $27,000.
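Checking that arithmetic (the ~74-year span is my assumption; it is what makes the figure come out):

$$\$7/\text{week} \times 52\ \text{weeks/year} \times 74\ \text{years} \approx \$26{,}900$$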
The point of cryonics is that just a bit...
Does the absence of people around me pointing at my arm and insisting it does not move, while I believe I have done plenty of activities using two arms, mean I am an extreme anosognosic, one who rewrites massive quantities of one-arm experiences into two-arm experiences on the fly?
Auckland, New Zealand
Something I would probably believe:
The AI informs you that it has discovered the purpose of the universe, and that part of the purpose is to find the purpose (the rest, apparently, can only be comprehended by philosophical zombies, of which you are not one).
Upon the purpose being found, the universe gave the FAI and humanity a score out of 3^^^3 (we got 42) and politely instructed the FAI to tell humanity: "Best of luck next time! Next game starts in 5 minutes."
This is so much fun that I suspect we have pushed back the date of Friendly AI by at least a day... or pushed it forward, because we are all now hyper-motivated to see who guessed this question right!
We pushed it forward by years, but everyone will be racing to produce an AI that is Friendly in every respect except that it makes their proposal true.
We are probably far more afraid of you than you are of us.
An important consideration not yet mentioned is that risk mitigation can be difficult to quantify, compared to disaster relief efforts, where if you save a house full of children you become a hero. Coupled with the fact that people extrapolate the future from the past (which misses all existential risks), the incentive to do anything about it drops pretty much to nil.
It is the truth, but you are explicitly saying those words so that the hearer (the patient) forms a false belief about the world. So it cannot really be truthful, because most people in that situation would, after hearing example 3, believe they are being given something that has more effect than a placebo.
Taking progress in AI to mean more real-world effectiveness:
Intelligence seems to show jumps in real-world effectiveness; e.g., the brains of great apes and humans are very similar, yet the difference in effectiveness is obvious.
So concluding that we are fine because the state of the art is not getting any more effective (not making progress) would be very dangerous. Perhaps tomorrow some team of AI researchers will combine the current state-of-the-art solutions in just the right way, resulting in a massive jump in real-world effectiveness? Maybe eno...
That's teaching for you: the raw truth of the world can be difficult to understand either in the context of what you already 'know' (religion -> evolution) or in its own right (quantum physics).
This reminds me of the "Lies-to-Humans" of Hex, the thinking machine of Discworld, where Hex tells the wizards the 'truth' of something, couched in terms they understand, basically to shut them up rather than to actually tell them what is really happening.
In general, a person cannot jump from any preconceived notion of how something...
WRT eugenics and other seemingly nasty solutions, it is as they say: sometimes it has to get worse before it gets better. No option that causes short-term harm obvious to the voting population, whatever its long-term benefits to the population as a whole, is going to be considered by politicians who want to be re-elected.
It seems to me that the science and rationality that give a social engineering project more than a shot-in-the-dark probability of working only came about recently (for eugenics, for example, post-Darwin). By the time that it was possible to do ...
Ease of entry and exit is really important. I want to be able to enter the world and join a discussion ASAP, but I don't want to feel compelled to stay for long periods of time.
So I think a browser-based program would be best, rather than Second Life.
But I think a place such as Second Life would be a good addition to what we have now with LW. Having a place where people like us can discuss things in practically real time would, I think, be useful in helping to create this community of rationalists.
Mechanisms that make it feel like ...
In many cases, I suspect, people adopt false beliefs, and the ensuing dark side, for short-term emotional gain, but in the long term the instrumental loss outweighs it.
That may be how the first set of false beliefs gets adopted. Once the base has been laid (perhaps containing many flaws to hide the falseness), a new belief being evaluated doesn't need to offer short-term emotional gain to be accepted, as long as it fits in with the current network of beliefs.
When I think of this, I think of missionaries, promising that having ...
Not only that, it becomes a glue that binds people together: the more agreement, the stronger the binding (and the more people get bound). At least, that is the analogy I use when I look at this; we (rationalists) have no glue, they (religions) have too much.
One wonders whether, in populations of rationalists (CFAR in particular), there are naturally mono people who are 'conformed' into being poly?