I certainly think you're right that the conscious mind and conscious decisions can, to a large extent, rewrite a lot of the brain's programming.
I'm surprised that you think that most rationalists don't think that. (That sentence is a mouthful, but you know what I mean.) A lot of rationalist writing is devoted to working on ways to do exactly that; a lot of people have written about how just reading the Sequences helped them basically reprogram their own brains to be more rational in a wide variety of situations.
Are there a lot of people in th...
He's not a superhumanly intelligent paperclipper yet, just human-level.
Or, you know, it's simply true that people experience much more suffering than happiness. Also, they aren't very aware of this themselves, because of how memories work.
That certainly is not true of me or of my life overall, except during a few short periods. I don't have the same access to other people's internal states, but I doubt it is true of most people.
There certainly are a significant number of people for whom it may be true: people who suffer from depression or chronic pain, or who are living in other difficult circumstances. I highly doubt that's the majority of people, though.
Yeah, I'm not sure how to answer this. I would do one set of answers for my personal social environment and a completely different set of answers for my work environment, to such a degree that trying to just average them wouldn't work. I could pick one or the other.
For reference: I teach in an urban high school.
I didn't even know that the survey was happening, sorry.
If you do decide to keep running the survey for a little longer, I'd take it, if that data point helps.
I think you need to narrow down exactly what you mean by a "futurist institute" and figure out what specifically you plan on doing before you can think about any of these issues.
Are you thinking about the kind of consulting agency that companies get advice from on what the market might look like in 5 years and what technologies their competitors are using? Or about something like a think tank that does research and writes papers with the intent of influencing political policy, and is usually supported by donations? Or an aca...
The best tradeoff is when you are well calibrated, just like with everything else.
"Well calibrated" isn't a simple thing, though. It's always a conscious decision of how willing you are to tolerate false positives vs false negatives.
Anyway, I'm not trying to shoot you down here; I really did like your article, and I think you made a good point. Just saying that it's possible to have a great insight and still overshoot or over-correct for a previous mistake you've made, and if you think that almost everyone you see is suffering, you may be doing just that.
There has to be some kind of trade-off between false positives and false negatives here, doesn't there? If you decide to "use that skill" to see more suffering, isn't it likely that you're getting at least some false positives, some cases where you think someone is suffering and they aren't?
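To make that trade-off concrete, here's a minimal sketch (Python, with made-up numbers, nothing from the original article): treat "noticing suffering" as a detector with an adjustable threshold, and watch false positives rise as false negatives fall.

```python
import random

random.seed(0)

# Hypothetical population: each person has a hidden "suffering" state and a
# noisy observable cue that only loosely tracks it.
people = []
for _ in range(10_000):
    suffering = random.random() < 0.3                   # assume 30% truly suffering
    cue = random.gauss(1.0 if suffering else 0.0, 1.0)  # noisy signal you actually observe
    people.append((suffering, cue))

# Lowering the detection threshold "sees" more suffering, but at a cost.
for threshold in (1.5, 1.0, 0.5, 0.0, -0.5):
    false_pos = sum(1 for s, c in people if not s and c > threshold)
    false_neg = sum(1 for s, c in people if s and c <= threshold)
    print(f"threshold {threshold:+.1f}: {false_pos:5d} false positives, {false_neg:5d} false negatives")
```

Being "well calibrated" is about where on that curve you choose to sit, not about escaping the curve.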
If "happiness" is too vague a term or has too many other meanings we don't necessarily want to imply, we could just say "positive utility". As in "try to notice when you or the people around you are experiencing positive utility".
I do think that actually taking note of that probably does help you move your happiness baseline; it's basically a rationalist version of "be thankful for the good things in your life". Something as simple as "you know, I enjoy walking the dog on a crisp fall day like this". Notici...
Really interesting essay.
It also made me wonder if the opposite is also a skill people need to learn; do people need to learn how to see happiness when it happens around them? Some people seem strangely blind to happiness, even their own.
To take this a step further: while this doesn't prove we're not in a simulation, I think that if you accept that our universe can't be simulated from a universe that looks like ours, it destroys the whole anthropic/probability argument in favor of simulations, because that argument seems to rely on the claim that we will eventually create a singularity which will simulate a lot of universes like ours. If that's not possible, then the main positive argument for the simulation hypothesis gets a lot weaker, I think.
Maybe there's a higher level universe with more permissive computational constraints, maybe not, but either way I'm not sure I see how you can make a probability argument for or against it.
Does the information-theory definition of entropy actually correspond to the physics definition of entropy? I understand what entropy means in terms of physics, but the information-theory definition of the term seemed fundamentally different to me. Is it, or does one actually correspond to the other in some way that I'm not seeing?
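For reference, the two definitions being compared look like this (standard textbook forms, nothing specific to this thread); the Gibbs entropy is the Shannon entropy taken over microstate probabilities, scaled by Boltzmann's constant:

```latex
% Shannon entropy of a distribution p_i (information theory)
H = -\sum_i p_i \log_2 p_i

% Gibbs entropy over microstate probabilities p_i (statistical mechanics)
S = -k_B \sum_i p_i \ln p_i
```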
Yeah, that's the issue, then. And there's no way around that, no way to just let us temporarily log in and confirm our emails later?
I guess, but it's cheaper to observe the sky in reality than it is on YouTube. To observe the sky, you just have to look out the window; turning on your computer costs energy and such.
So in order for this to be coherent, I think you have to somehow make the case that our reality is to some extent rare or unlikely or expensive, and I'm not sure how you can do that without knowing more about the creation of the universe than we do, or how "common" the creation of universes is over... some scale (I'm not even sure what scale you would use; over infinite periods of time? Over a multiverse? Does the question even make sense?)
In the simplest example, when you have a closed system where part of the system starts out warmer and the other part starts out cooler, it's fairly intuitive to understand why entropy will usually increase over time until it reaches the maximum level. When two molecules of gas collide, a high-energy (hot) molecule and a lower-energy (cooler) molecule, the most likely result is that some energy will be transferred from the warmer molecule to the cooler molecule. Over time, this process will result in the temperature equalizing.
The math behind this...
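A toy version of that intuition (just a sketch, not the actual statistical-mechanics derivation): start with a hot half and a cold half, let random pairs of molecules collide and pass a random share of the energy difference from the hotter one to the cooler one, and the two halves drift toward the same average energy.

```python
import random

random.seed(0)

# Toy closed system: 500 "hot" molecules and 500 "cold" ones.
energies = [10.0] * 500 + [1.0] * 500

for step in range(200_001):
    i, j = random.randrange(1000), random.randrange(1000)
    if i == j:
        continue
    # On each collision, move a random fraction of the energy gap from hot to cold.
    hot, cold = (i, j) if energies[i] > energies[j] else (j, i)
    transfer = random.random() * (energies[hot] - energies[cold]) / 2
    energies[hot] -= transfer
    energies[cold] += transfer
    if step % 50_000 == 0:
        print(f"step {step:6d}: originally-hot half avg {sum(energies[:500]) / 500:.2f}, "
              f"originally-cold half avg {sum(energies[500:]) / 500:.2f}")
```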
It doesn't seem to be working for me; I tried to reset my password, and it keeps saying "user not found", although I double-checked and it is the same email I have on my account here on LessWrong.
It seems weird to place a "price" on something like the Big Bang and the universe. For all we know, in some state of chaos or quantum uncertainty, the odds of something like a Big Bang happening eventually approach 100%, which makes it basically "free" by some definition of the term. Especially if something like the Big Bang and the universe happens an infinite number of times, either sequentially or simultaneously.
Again, we don't know that that's true, but we don't know it's not true either.
Yeah, I saw that. In fact looking back on that comment thread, it looks like we had almost the exact same debate there, heh, where I said that I didn't think the simulation hypothesis was impossible but that I didn't see the anthropic argument for it as convincing for several reasons.
But I don't see a practical reason to run few-minute simulations
The main explanation that I've seen for why an advanced AI might run a lot of simulations is in order to better predict how humans would react in different situations (perhaps to learn to better manipulate humans, or to understand human value systems, or maybe to achieve whatever theoretically pro-human goal was set in the AI's utility function, etc.). If so, then it likely would run a very large number of very short simulations, designed to put uploaded minds in very specific and very clea...
If you're in a simulation, the only reference class that matters is "how long has the simulation been running for". And most likely, for anyone running billions of simulations, the large majority of them are short, only a few minutes or hours. Maybe you could run a simulation that lasts as long as the universe does in subjective time, but most likely there would be far more short simulations.
Basically, I don't think you can use the doomsday argument at all if you're in a simulation, unless you know how long the simulation has been running, which you can't know. You can accept either SA (the simulation argument) or DA (the doomsday argument), but you can't use both of them at the same time.
The specific silliness of "humans before business" is pretty straightforward: business is something humans do, and "humans before this thing that humans do" is meaningless or tautological. Business doesn't exist without humans, right?
Eh, it's not as absurd as that. You know how we worry that AIs might optimize something easily quantifiable, but in a way that destroys human value? I think it's entirely reasonable to think that businesses may do the same thing, and optimize for their own profit in a way that destroys human value...
Ideally, you would want to generate enough content for the person who wants to read LW two hours a day, and then promote or highlight the best 5%-10% of the content so someone who has only two hours a week can see it.
Everyone is much better off that way. The person with only two hours a week is getting much better content than if there were much less content to begin with.
Quantum mechanics is also very counterintuitive, creates strange paradoxes, etc., but that doesn't make it false.
Sure, and if we had anything like the amount of evidence for anthropic probability theories that we have for quantum theory, I'd be glad to go along with it. But short of a lot of evidence, you should be more skeptical of theories that imply all kinds of improbable results.
As I said above, there is no need to tweak reference classes to which I belong, as there is only one natural class.
I don't see that at all. Why not classify yourself a...
Let me give a concrete example.
If you take seriously the kind of anthropic probabilistic reasoning that leads to the doomsday argument, then the same reasoning also invalidates the argument, because we probably aren't living in the real universe at all; we're probably living in a simulation. Except you're probably not living in a simulation, because we're probably living in a short period of quantum randomness that appears long after the universe ends, which recreates you for a fraction of a second through random chance and then takes you apart again. There sh...
I think the argument probably is false, because arguments of the same type can be used to "prove" a lot of other things that also clearly seem to be false. When you take that kind of anthropic reasoning to its natural conclusion, you reach a lot of really bizarre places that don't seem to make sense.
In math, it's common for a proof to be disputed by demonstrating that the same form of proof can be used to show something that seems to be clearly false, even if you can't find the exact step where the proof went wrong, and I think the same is true about the doomsday argument.
Maybe; there certainly are a lot of good rationalist bloggers who have at least at some point been interested in LessWrong. I don't think bloggers will come back, though, unless the site first becomes more active than it currently is. (They may give it a chance after the beta is rolled out, but if activity doesn't increase quickly they'll leave again.) Activity and an active community are necessary to keep a project like this going. Without an active community, there's no point in coming back here instead of posting on your own blog.
I guess my concer...
My concern around the writing portion of your idea is this: from my point of view, the biggest problem with LessWrong is that the sheer quantity of new content is extremely low. In order for a LessWrong 2.0 to succeed, you absolutely have to get more people spending the time and effort to create great content. Anything you do to make it harder for people to contribute new content will make that problem worse, especially anything that creates a barrier for new people who want to post something in Discussion. People will not want to write content tha...
Right. Maybe not even that; maybe he just didn't have the willpower required to become a doctor on that exact day, and if he re-takes the class next semester maybe that will be different.
So, to get back to the original point, I think the original poster was worried about not having the willpower to give to charity and, if he doesn't have that, worried he also might not have the higher levels of willpower you would presumably need to do something truly brave if it was needed (like, in his example, resisting someone like the Nazis in 1930s Germany). And he was able to use that fear to increase his willpower and give more to charity.
He might not be wrong in his beliefs about himself. Just because a person actually would prefer X to Y, it doesn't mean he is always going to rationally act in a way that will result in X. In a lot of ways we are deeply irrational beings, especially when it comes to issues like short-term goals vs. long-term goals (like charity vs. instant rewards).
A person might really want to be a doctor, might spend a huge amount of time and resources working his way through medical school, and then may "run out of willpower" or "suffer from a lack of ...
Sure, that's very possible. Just because it didn't work last time doesn't mean it can't work now with better technology.
I think anyone who goes into it now, though, had better have a really detailed explanation for why consumer interest was so low last time, despite all the attention and publicity the "sharing economy" got in the popular press, and a plan to quickly get a significant customer base this time around. Something like this can't work economically without scale, and I'm just not sure if the consumer interest exists.
Yeah, a number of businesses tried it between 2007 and 2010. SnapGoods was probably the best known. This article lists 8; 7 went out of business, and the 8th one is just limping along with only about 10,000 people signed up for it. (And that one, NeighborGoods, only survived after removing the option to rent something.)
https://www.fastcompany.com/3050775/the-sharing-economy-is-dead-and-we-killed-it
There just wasn't a consumer base interested in the idea, basically. Silicon valley loved to talk about it, people loved writing articles about it, but it...
The other likely outcome seems to be that you keep enough vehicles on hand to satisfy peak demand, and then they just sit quietly in a parking lot the rest of the time.
Probably this.
Then again, it's not all bad; it might be beneficial for the company to get some time between the morning rush hour and the evening rush hour to bring the cars somewhere to be cleaned, recharge them, do any maintenance and repair, etc. I imagine just cleaning the fast food wrappers and whatever out of the cars will be a significant daily job.
It depends on the details. What will happen to traffic? Maybe autonomous cars will be more efficient in terms of traffic, but on the flip side, people may drive more often if driving is more pleasant, which might make traffic worse.
Also, if you're using a rental or "Uber" model where you rent the autonomous car as part of a service, that kind of service might be a lot better if you're living in a city. It's much easier to make a business model like that work in a dense urban environment, wait times for an automated car to come get you will probably be a lot shorter, etc.
You don't own a drill that sits unused 99.9% of the time, you have a little drone bring you one for an hour for like two dollars.
Just a quick note; people have been predicting exactly this for about 10-15 years because of the internet, and it hasn't happened yet. The "people will rent a hammer instead of buying it" idea was supposed to be the ur-example of the new sharing economy, but it never actually materialized, while Uber and Airbnb and other stuff did. We can speculate about why it didn't happen, but IMHO, it wasn'...
Yeah, that's a fair point.
Sure. Obviously people will always consider trade-offs, in terms of risks, costs, and side effects.
Although it is worth mentioning that if you look at, say, most people with cancer, people seem to be willing to go through extremely difficult and dangerous procedures even to have just a small chance of extending their lifespan a little bit. But perhaps people will be less willing to do that with a more vague problem like "aging"? Hard to say.
I don't think it will stay like that, though. Maybe the first commercially available aging treatment will be borderline enough that it's a reasonable debate whether it's worthwhile, but I expect treatments to continue to improve from that point.
I don't believe that my vote will change a result of a presidential election, but I have to behave as if it will, and go to vote.
The way I think of this is something like this:
There is something like a 1 in 10 million chance that my vote will affect the presidential election (and also some chance of my vote affecting other important elections, like Congress, governor, etc.).
Each year, the federal government spends $3.9 trillion. Its influence is probably actually significantly greater than that, since that doesn't include the effect of law...
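A rough back-of-the-envelope version of that calculation (using the figures above; the "share of spending influenced" framing is my own simplification, not a claim about how budgets actually respond to one vote):

```python
# Toy expected-value framing of a single presidential vote.
p_decisive = 1 / 10_000_000     # rough chance one vote swings the election
federal_budget = 3.9e12         # annual federal spending, in dollars
term_years = 4                  # length of a presidential term

expected_influence = p_decisive * federal_budget * term_years
print(f"${expected_influence:,.0f} of federal spending per vote, in expectation")
# -> roughly $1,560,000, before counting laws, regulations, foreign policy, etc.
```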
It would certainly have to depend on the details, since obviously many people do not choose the longevity treatments that are already available, like healthy eating and exercise, even though they are usually not very expensive.
Eh. That seems to be a pretty different question.
Let's say that an hour of exercise a day will extend your lifespan by 5 years. If you sleep 8 hours a night, that's about 6.3% of your waking time; if you live 85 years without exercise vs 90 years with exercise, you probably have close to the same amount of non-exercising wa...
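The arithmetic behind that comparison, roughly (same hypothetical numbers as above):

```python
# Hypothetical numbers: 8 hours of sleep, 1 hour of daily exercise,
# 85-year lifespan without exercise vs. 90 years with it.
waking_hours = 24 - 8
exercise_hours = 1

share = exercise_hours / waking_hours
print(f"exercise as a share of waking time: {share:.2%}")   # 6.25%, i.e. the ~6.3% above

no_exercise_total = 85 * 365 * waking_hours
with_exercise_total = 90 * 365 * (waking_hours - exercise_hours)
print(f"non-exercising waking hours, no exercise:   {no_exercise_total:,}")
print(f"non-exercising waking hours, with exercise: {with_exercise_total:,}")
# The two totals come out within about 1% of each other.
```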
This is a falsifiable empirical prediction. We will see whether it turns out to be true or not.
Yes, agreed.
I should probably be more precise. I don't think that 100% of people will necessarily choose longevity treatments once they become available. But depending on the details, I think it will be pretty high. I think that a very high percentage of people who today sound ambivalent about it will go to great lengths to get it once it becomes something that exists in reality.
I also think that the concern that "other people" will get to live ...
I don't think the lack of life extension research funding actually comes from people not wanting to live; I think it has more to do with the fact that the vast majority of people don't take it seriously yet and don't believe that we could actually significantly change our lifespan. That's compounded with a kind of "sour grapes" defensive reflex where, when people think they can never get something, they try to convince themselves they don't really want it.
I think that if progress is made, at some point there will be a phase change, where more people start to realize that it is possible and suddenly flip from not caring at all to caring a great deal.
You can use relativity to demonstrate that certain events can be simultaneous in one reference frame and not in others, but I'm not seeing any way to do that in this case, assuming that the simulated and non-simulated future civilizations are both in the same inertial reference frame. Am I missing something?
That's one of the advantages of what's known as "preference utilitarianism". It defines utility in terms of the preferences of people; so, if you have a strong preference for remaining alive, then remaining alive is therefore the pro-utility option.
The answer to those objections, by the way, is that an "adequately objective" metaethics is impossible: the minds of complex agents (such as humans) are the only place in the universe where information about morality is to be found, and there are plenty of possible minds in mind-design space (paperclippers, pebblesorters, etc.) from which it is impossible to extract the same information.
Eliezer attempted to deal with that problem by defining a certain set of things as "h-right", that is, morally right from the frame of reference of the human mind. He made clear that alien entities probably would not care about what is h-right, but that humans do, and that's good enough.
I don't think that's actually true.
Even if it were, I don't think you can say you have a belief if you haven't actually deduced it yet. Even taking something simple like math, you might believe theorem A, theorem B, and theorem C, and it might be possible to deduce theorem D from those three, but I don't think it's accurate to say "you believe D" until you've actually figured out that it logically follows from A, B, and C.
If you've never even thought of something, I don't think you can say that you "believe" it.
Except by their nature, if you're not storing them, then the next one is not true.
Let me put it this way.
Step 1: You have a thought that X is true. (Let's call this 1 bit of information.)
Step 2: You notice yourself thinking step 1. Now you say "I appear to believe that X is true." (Now this is 2 bits of information: X, and belief in X.)
Step 3: You notice yourself thinking step 2. Now you say "I appear to believe that I believe that X is true." (3 bits of information: X, belief in X, and belief in belief in X.)
If at any poin...
Fair.
I actually think a bigger weakness in your argument is here:
I believe that I believe that I believe that I exist. And so on and so forth, ad infinitum. An infinite chain of statements, all of which are exactly true. I have satisfied Eliezer's (fatuous) requirements for assigning a certain level of confidence to a proposition.
That can't actually be infinite. If nothing else, your brain cannot possibly store an infinite regression of beliefs at once, so at some point, your belief in belief must run out of steps.
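A toy illustration of that point (obviously a brain isn't a string buffer, but the storage issue is the same): each extra level of "I believe that..." has to be represented somewhere, so the explicit chain can only go finitely deep.

```python
import sys

# Build the nested statement "I believe that I believe that ... X is true"
# and watch its storage cost grow with the depth of the regression.
statement = "X is true"
for depth in (0, 10, 100, 1000):
    nested = "I believe that " * depth + statement
    print(f"depth {depth:4d}: {sys.getsizeof(nested):7d} bytes")
```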
I think the best possible argument against "I think, therefore I am" is that there may be something either confused or oversimplified about either your definition of "I", your definition of "think", or your definition of "am".
"I" as a concept might turn out to not really have much meaning as we learn more about the brain, for example, in which case the most you could really say would be "Something thinks therefore something thinks" which loses a lot of the punch of the original.
Here's a question. As humans, we have the inherent flexibility to declare that something has either a probability of zero or a probability of one, and then the ability to still change our minds later if somehow that seems warranted.
You might declare that there's a zero probability that I have the ability to inflict infinite negative utility on you, but if I then take you back to a warehouse where I have banks of computers that I can mathematically demonstrate contain uploaded minds which are going to suffer in the equivalent of hell for an infinite amo...
Uh. About 10 posts ago I linked you to a long list of published scientific papers, many of which you can access online. If you wanted to see the data, you easily could have.
I was watching part of your video, and I'm really surprised that you think that LessWrong doesn't have what you call "paths forward", that is, ways for people who disagree to find a path toward considering where they may be wrong and trying to hear the other person's point of view. In fact, that's actually a huge focus around here, and a lot has been written about ways to do it.