All of Yosarian2's Comments + Replies

I was watching part of your video, and I'm really surprised that you think that LessWrong doesn't have what you call "paths forward", that is, ways for people who disagree to find a path towards considering where they may be wrong and trying to hear the other person's point of view. In fact, that's actually a huge focus around here, and a lot has been written about ways to do that.

I certainly think you're right that the conscious mind and conscious decisions can, to a large extent, rewrite a lot of the brain's programming.

I am surprised to think that you think that most rationalists don't think that. (That sentence is a mouthful, but you know what I mean.) A lot of rationalist writing is devoted to working on ways to do exactly that; a lot of people have written about how just reading the Sequences helped them basically reprogram their own brain to be more rational in a wide variety of situations.

Are there a lot of people in th... (read more)

0SquirrelInHell
It's not that they think it cannot do major things at all. They don't expect to be able to do them overnight, and yes, "major changes to subconscious programming overnight" is one of the things I've seen to be possible if you hit the right buttons. And of course, if you can do major things overnight, there are some even more major things you find yourself able to do at all that you couldn't before.

He's not a superhumanly intelligent paperclipper yet, just human-level.

Or, you know, it's just simply true that people experience much more suffering than happiness. Also, they aren't so very aware of this themselves, because of how memories work.

That certainly is not true of me or of my life overall, except during a few short periods. I don't have the same access to other people's internal state, but I doubt it is true of most people.

There certainly are a significant number of people for whom it may be true: people who suffer from depression or chronic pain, or who are living in other difficult circumstances. I highly doubt that that's the majority of people, though.

Yeah, I'm not sure how to answer this. I would do one set of answers for my personal social environment and a completely different set of answers for my work environment, to such a degree that trying to just average them wouldn't work. I could pick one or the other.

Reference: I teach in an urban high school.

2Gunnar_Zarncke
I would average. After all, even in one environment there are very many samples, I guess, even if they cluster. But don't worry too much. It's just an LW poll :-)

I didn't even know that the survey was happening, sorry.

If you do decide to keep running the survey for a little longer, I'd take it, if that data point helps.

I think you need to try and narrow your focus on exactly what you mean by a "futurist institute" and figure out what specifically you plan on doing before you can think about any of these issues.

Are you thinking about the kind of consulting agency that companies get advice from on what the market might look like in 5 years and what technologies their competitors are using? Or about something like a think-tank that does research and writes papers with the intent of influencing political policy, and is usually supported by donations? Or an aca... (read more)

1fowlertm
You're right. Here is a reply I left on a Reddit thread answering this question:

This institution will essentially be a formalization and scaling-up of a small group of futurists that already meets to discuss emerging technologies and similar subjects. Despite the fact that they've been doing this for years, attendance is almost never more than ten people (25 attendees would be fucking Woodstock). I think the best way to begin would be to try and use this seed to create a TED-style hub of recurring discussions on exactly these topics.

There's a lot of low-hanging fruit to be picked in the service of this goal. For example, I recently convinced the organizer of the futurist group to switch to a regular spot at the local library instead of the nigh-impossible-to-find hackerspace at which they were meeting before. I've also done things like buy pizza for everyone. Once we have a nice, clean, well-lit venue and at least 20 people regularly attending, I'd like to start reaching out to local businesses, writers, artists, and academics to have them give talks to the group. As it stands it probably wouldn't be worth their time just to speak to 8 people. TEDxMileHigh does something vaguely like this, but it isn't as focused and only occurs once per year.

Once I get that lined up, I'd like the group's first 'product' to be a near-comprehensive 'talent audit' for the Denver/Boulder region. If I had a billion dollars and wanted to invest it in the highest-impact companies and research groups, I'd have no idea where to get started. Here are some questions I'd like to answer: What are the biggest research and investment initiatives currently happening? Is there more brainpower in nanotech or AI? In neurotech or SENS-type fields? AFAICT nobody knows. Who is doing the most investing? What kind of capital is available from hedge funds or angel investors? What sorts of bridges exist between academia, the private sector, think tanks, and investment fir

The best tradeoff is when you are well calibrated, just like with everything else.

"Well calibrated" isn't a simple thing, though. It's always a conscious decision of how willing you are to tolerate false positives vs false negatives.

Anyway, I'm not trying to shoot you down here; I really did like your article, and I think you made a good point. Just saying that it's possible to have a great insight and still overshoot or over-correct for a previous mistake you've made, and if you think that almost everyone you see is suffering, you may be doing just that.

0SquirrelInHell
I beg to differ; being well calibrated has a mathematically precise definition. E.g., if you are thinking of a binary suffering/not-suffering classification (oversimplified, but it's just to make a point), then I want my perception to assign probabilities such that, when compared with the true answers, cross-entropy is minimized. That's pretty much what I care about when I'm fixing my perception. Of course there's the question of how aware you want to be of certain information at each moment. But you want to be well calibrated nonetheless. Or, you know, it's just simply true that people experience much more suffering than happiness. Also, they aren't so very aware of this themselves, because of how memories work.
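(For concreteness, here is a minimal sketch of the cross-entropy criterion described above, for the binary suffering/not-suffering case; the labels and probabilities are made up for illustration.)

```python
import math

def cross_entropy(labels, probs):
    """Mean negative log-likelihood of binary labels under predicted probabilities."""
    return -sum(
        y * math.log(p) + (1 - y) * math.log(1 - p)
        for y, p in zip(labels, probs)
    ) / len(labels)

labels = [1, 0, 0, 1, 0]  # 1 = suffering, 0 = not suffering (made-up data)

# Overconfident "everyone is suffering" predictions score worse than
# predictions near the actual base rate (40% here):
print(cross_entropy(labels, [0.9] * 5))  # ~1.42
print(cross_entropy(labels, [0.4] * 5))  # ~0.67
```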

There has to be some kind of trade-off between false positives and false negatives here, doesn't there? If you decide to "use that skill" to see more suffering, isn't it likely that you are getting at least some false positives, some cases where you think someone is suffering and they aren't?

2SquirrelInHell
The best tradeoff is when you are well calibrated, just like with everything else. In the "selfish default" you basically never have false positives, but you also have false negatives, like, all the time. So duh.

If "happiness" is too vague a term or has too many other meanings we don't necessarily want to imply, we could just say "positive utility". As in "try to notice when you or the people around you are experiencing positive utility".

I do think that actually taking note of that probably does help you move your happiness baseline; it's basically a rationalist version of "be thankful for the good things in your life". Something as simple as "you know, I enjoy walking the dog on a crisp fall day like this". Notici... (read more)

Really interesting essay.

It also made me wonder if the opposite is also a skill you need to learn; do people need to learn how to see happiness when that happens around them? Some people seem strangely blind to happiness, even to their own.

2SquirrelInHell
Hmm, interesting point! On one hand, my intuition suggests that "happiness" is too ill-defined a thing to do work here (sorry if that sounds annoyingly mysterious; I'm not sure what it'd mean exactly either!), and thinking in these terms can only take you so far. OTOH, there's definitely some stuff you can do to push your "happiness baseline" around a little bit, and I think some people from the rationality blogosphere have reported on this (Agenty Duck? can't find it).

To take this a step further: while this doesn't prove we're not in a simulation, I think if you accept that our universe can't be simulated from within a universe that looks like ours, it destroys the whole anthropic probability argument in favor of simulations, because that argument seems to rely on the claim that we will eventually create a singularity which will simulate a lot of universes like ours. If that's not possible, then the main positive argument for the simulation hypothesis gets a lot weaker, I think.

Maybe there's a higher level universe with more permissive computational constraints, maybe not, but either way I'm not sure I see how you can make a probability argument for or against it.

Does the information theory definition of entropy actually correspond to the physics definition of entropy? I understand what entropy means in terms of physics, but the information theory definition of the term seemed fundamentally different to me. Is it, or does one actually correspond to the other in some way that I'm not seeing?

3IlyaShpitser
Shannon's definition of entropy corresponds very closely to the definition of entropy used in statistical mechanics. It's slightly more general and devoid of "physics baggage" (macrostates and so on). Analogy: the Ising model of spin glasses vs. undirected graphical models (Markov random fields). The former has a lot of baggage like "magnetization, external field, energy." The latter is just a statistical model of conditional independence on a graph. The Ising model is a special case (in fact the first developed case, back in 1910) of a Markov random field.

Physicists have a really good nose for models.
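(For reference, the standard correspondence, a textbook fact rather than something from the thread: the two definitions agree up to the Boltzmann constant and the base of the logarithm.)

```latex
% Shannon entropy (information theory) and Gibbs entropy
% (statistical mechanics), over the same distribution {p_i}:
\[
  H = -\sum_i p_i \log_2 p_i ,
  \qquad
  S = -k_B \sum_i p_i \ln p_i ,
\]
% hence S = (k_B \ln 2) H: they differ only by a constant factor.
```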

Yeah, that's the issue, then. And there's no way around that, no way to just let us temporarily log in and confirm our emails later?

I guess, but it's cheaper to observe the sky in reality than it is on YouTube. To observe the sky, you just have to look out the window; turning on your computer costs energy and such.

So in order for this to be coherent, I think you have to somehow make the case that our reality is to some extent rare or unlikely or expensive, and I'm not sure how you can do that without knowing more about the creation of the universe than we do, or how "common" the creation of universes is over...some scale (not even sure what scale you would use; over infinite periods of time? Over a multiverse? Does the question even make sense?)

0turchin
In that case, I use just the same logic as Bostrom: each real civilization creates zillions of copies of some experiences. It has already happened in the form of dreams, movies and pictures. Thus I normalize by the number of existing civilizations and don't have obscure questions about the nature of the universe or the price of the Big Bang. I just assume that inside a civilization, rare experiences are often faked. They are rare because they are in some way expensive to create, like diamonds or volcanic observation, but their copies are cheap, like glass or pictures.

In the simplest example, when you have a closed system where part of the system starts out warmer and the other part starts out cooler, it's fairly intuitive to understand why entropy will usually increase over time until it reaches the maximum level. When two molecules of gas collide, a high-energy (hot) molecule and a lower-energy (cooler) molecule, the most likely result is that some energy will be transferred from the warmer molecule to the cooler one. Over time, this process will result in the temperature equalizing.

The math behind this... (read more)
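(A toy illustration of the equalization described above; this is not the elided math, just a minimal random-collision sketch with made-up numbers.)

```python
import random

# N "molecules", half hot, half cold. Each step, two distinct random
# molecules "collide" and randomly repartition their combined energy.
# The average energies of the two halves equalize over time.
N = 1000
energies = [10.0] * (N // 2) + [1.0] * (N // 2)  # hot half, cold half

for _ in range(200_000):
    i, j = random.sample(range(N), 2)
    total = energies[i] + energies[j]
    split = random.random()
    energies[i], energies[j] = total * split, total * (1 - split)

hot_avg = sum(energies[: N // 2]) / (N // 2)
cold_avg = sum(energies[N // 2:]) / (N // 2)
print(hot_avg, cold_avg)  # both approach the overall mean of 5.5
```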

It doesn't seem to be working for me; I tried to reset my password, and it keeps saying "user not found", although I double-checked and it is the same email I have on my account here on LessWrong.

0Habryka
We have a copy of the database from 3 months ago (will be updating to a more recent one on launch), but this means that if you added an email to your account later than that we might not have it.
3HungryHippo
Same here. I just made a new account.

It seems weird to place a "price" on something like the Big Bang and the universe. For all we know, in some state of chaos or quantum uncertainty, the odds of something like a Big Bang happening eventually approach 100%, which makes it basically "free" by some definition of the term. Especially if something like the Big Bang and the universe happens an infinite number of times, either sequentially or simultaneously.

Again, we don't know that that's true, but we don't know it's not true either.

0turchin
Maybe it is more correct to say the price of the observation. It is cheaper to see a volcanic eruption on YouTube than in reality.

Yeah, I saw that. In fact looking back on that comment thread, it looks like we had almost the exact same debate there, heh, where I said that I didn't think the simulation hypothesis was impossible but that I didn't see the anthropic argument for it as convincing for several reasons.

0turchin
Probably I also said it before, but SA is in fact a comparison of prices. And it basically says that cheaper things are more common, and fakes are cheaper than real things. That is why we more often see images of a nuclear blast than a real one. And yes, there are many short simulations in our world, like dreams, thoughts, clips, pictures.

But I don't see a practical reason to run few-minute simulations

The main explanation that I've seen for why an advanced AI might run a lot of simulations is in order to better predict how humans would react in different situations (perhaps to learn to better manipulate humans, or to understand human value systems, or maybe to achieve whatever theoretically pro-human goal was set in the AI's utility function, etc.). If so, then it likely would run a very large number of very short simulations, designed to put uploaded minds in very specific and very clea... (read more)

0turchin
Sounds convincing. I will think about it. Did you see my map of the simulation argument by the way? http://lesswrong.com/lw/mv0/simulations_map_what_is_the_most_probable_type_of/

If you're in a simulation, the only reference class that matters is "how long has the simulation been running for". And most likely, for anyone running billions of simulations, the large majority of them are short, only a few minutes or hours. Maybe you could run a simulation that lasts as long as the universe does in subjective time, but most likely there would be far more short simulations.

Basically, I don't think you can use the doomsday argument at all if you're in a simulation, unless you know how long the simulation's been running, which you can't know. You can accept either SA or DA, but you can't use both of them at the same time.

0turchin
I agree that in the simulation one could have fake memories of the past of the simulation. But I don't see a practical reason to run few-minute simulations (unless of a very important event) - a Fermi-solving simulation must run from the beginning of the 20th century until the civilization ends. Game-simulations will also probably be lifelong. Even resurrection-simulations should be lifelong. So I think that the typical simulation length is around one human life. (One exception I could imagine: intense respawning in case of some problematic moment. In that case, there will be many respawnings around a possible death event, but the consequences of this idea are worrisome.) If we apply DA to the simulation, we should probably count false memories as real memories, because the length of false memories is also random, and there is no actual difference between precalculating false memories and actually running a simulation. However, the termination of the simulation is real.

The specific silliness of "humans before business" is pretty straightforward: business is something humans do, and "humans before this thing that humans do" is meaningless or tautological. Business doesn't exist without humans, right?

Eh, it's not as absurd as that. You know how we worry that AIs might optimize something easily quantifiable, but in a way that destroys human value? I think it's entirely reasonable to think that businesses may do the same thing, and optimize for their own profit in a way that destroys human value... (read more)

Ideally, you would want to generate enough content for the person who wants to read LW two hours a day, and then promote or highlight the best 5%-10% of the content so someone who has only two hours a week can see it.

Everyone is much better off that way. The person with only two hours a week is getting much better content than if there was much less content to begin with.

1Viliam
If LW2 remembers who read what, I guess "a list of articles you haven't read yet, ordered by highest karma, and secondarily by most recent" would be a nice feature that would scale automatically.
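(A sketch of the feature Viliam describes, as a simple sort over hypothetical article records; the field names are made up.)

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Article:
    title: str
    karma: int
    posted: date
    read: bool

def unread_feed(articles):
    """Unread articles, highest karma first; ties broken by most recent."""
    return sorted(
        (a for a in articles if not a.read),
        key=lambda a: (a.karma, a.posted),
        reverse=True,
    )
```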

Quantum mechanics is also very counterintuitive, creates strange paradoxes, etc., but that doesn't make it false.

Sure, and if we had anything like the amount of evidence for anthropic probability theories that we have for quantum theory, I'd be glad to go along with it. But short of a lot of evidence, you should be more skeptical of theories that imply all kinds of improbable results.

As I said above, there is no need to tweak reference classes to which I belong, as there is only one natural class.

I don't see that at all. Why not classify yourself a... (read more)

1turchin
I am a member of the class of beings able to think about the Doomsday argument, and it is the only correct reference class. And for this class, my day is very typical: I live in an advanced civilization interested in such things and started discussing the problem of DA in the morning. I can't say that I am randomly chosen from hunter-gatherers, as they were not able to think about DA.

However, I could observe some independent events (if they are independent of my existence) at a random moment of their existence and thus predict their duration. It will not help to predict the duration of the existence of hunter-gatherers, as that is not truly independent of my existence. But it could help in other cases.

20 minutes ago I participated in a shooting in my house - but it was just a night dream, and it supports the simulation argument, which basically claims that most events I observe are unreal, as their simulation is cheaper. I have participated in hundreds of shootings in dreams, games and movies during my life, but never in a real one: simulated events are much more frequent.

Thus DA and SA are not too bizarre; they become bizarre because of incorrect solving of the reference class problem. The strangeness of DA appears when we try to compare it with some unrealistic expectations about our future: that there will be billions of years full of billions of people living in a human-like civilization. But more probable is that in several decades AI will appear, which will run many past simulations (and probably kill most humans). It is exactly what we could expect from observed technological progress, and DA and SA just confirm observed trends.

Let me give a concrete example.

If you take seriously the kind of anthropic probabilistic reasoning that leads to the doomsday argument, then it also invalidates the same argument, because we probably aren't living in the real universe at all; we're probably living in a simulation. Except you're probably not living in a simulation, because we're probably living in a short period of quantum randomness that appears long after the universe ends, which recreates you for a fraction of a second through random chance and then takes you apart again. There sh... (read more)

0turchin
It is not a bug, it is a feature :) Quantum mechanics is also very counterintuitive, creates strange paradoxes, etc., but that doesn't make it false. I think that DA and the simulation argument are both true, as they support each other. Adding Boltzmann brains is more complicated, but I don't see a problem with being a BB, as there is a way to create a coherent world picture using only BBs and paths in the space of possible minds, but I won't elaborate here as I can't do it shortly. :)

As I said above, there is no need to tweak reference classes to which I belong, as there is only one natural class. However, if we take different classes, we get predictions for different events: for example, the class of humans will go extinct soon, but the class of animals could exist for billions more years, and that is quite a possible outcome: humans go extinct, but animals survive. There is nothing mysterious in reference classes, just different answers to different questions. The measure is the real problem, I think.

The theory of DA is testable if we apply it to many smaller examples, as Gott successfully did in predicting the lengths of Broadway shows. So the theory is testable, no weirder than other theories we use, and there is no contradiction between the doomsday argument and the simulation argument (they both mean that there are many past simulations which will be turned off soon). However, it still could be false, or have one more turn which will make things even weirder, like if we try to account for mathematically possible observers or multilevel simulations or Boltzmann AIs.

I think the argument probably is false, because arguments of the same type can be used to "prove" a lot of other things that also clearly seem to be false. When you take that kind of anthropic reasoning to its natural conclusion, you reach a lot of really bizarre places that don't seem to make sense.

In math, it's common for a proof to be disputed by demonstrating that the same form of proof can be used to show something that seems to be clearly false, even if you can't find the exact step where the proof went wrong, and I think the same is true about the doomsday argument.

1turchin
I think the opposite: the Doomsday argument (in one of its forms) is an effective predictor in many common situations, and thus it could also be applied to the duration of human civilization. DA is not absurd: our expectations about the human future are absurd. For example, I could predict median human life expectancy based on my supposedly random age. My age is several decades, and human life expectancy is 2 × (several decades) with 50 percent probability (and it is true).
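(The arithmetic behind the "2 × (several decades)" claim is Gott's delta-t argument; as a sketch, assume the moment of observation is uniformly distributed over the total duration.)

```latex
% If the observed age t is a uniformly random fraction of the total
% duration T, i.e. t/T ~ Uniform(0,1), then
\[
  P(T < 2t) \;=\; P\!\left(\frac{t}{T} > \frac{1}{2}\right) \;=\; \frac{1}{2},
\]
% so with 50% probability the total duration is less than twice the
% observed age.
```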

Maybe; there certainly are a lot of good rationalist bloggers who have at least at some point been interested in LessWrong. I don't think bloggers will come back, though, unless the site first becomes more active than it currently is. (They may give it a chance after the Beta is rolled out, but if activity doesn't increase quickly they'll leave again.) Activity and an active community are necessary to keep a project like this going. Without an active community here, there's no point in coming back here instead of posting on your own blog.

I guess my concer... (read more)

My concern around the writing portion of your idea is this: from my point of view, the biggest problem with LessWrong is that the sheer quantity of new content is extremely low. In order for a LessWrong 2.0 to succeed, you absolutely have to get more people spending the time and effort to create great content. Anything you do to make it harder for people to contribute new content will make that problem worse. Especially anything that creates a barrier for new people who want to post something in Discussion. People will not want to write content tha... (read more)

0Viliam
That depends on how much time you actually want to spend reading LW. I mean, the optimal quantity will be different for a person who reads LW two hours a day, or a person who reads LW two hours a week. Now the question is which one of these should we optimize LW for? The former seems more loyal, but the latter is probably more instrumentally rational if we agree that people should be doing things besides reading web. (Also, these days LW competes for time with SSC and others.)
7Habryka
Agree with this. I do however think that we actually have a really large stream of high-quality content already in the broader rationality diaspora that we just need to tap into and get onto the new page. As such, the problem is a bit easier than getting a ton of new content creators, and is instead more a problem of building something that the current content creators want to move towards. And as soon as we have a high-quality stream of new content, I think it will be easier to attract new writers who will be looking to expand their audience.

Right. Maybe not even that; maybe he just didn't have the willpower required to become a doctor on that exact day, and if he re-takes the class next semester maybe that will be different.

So, to get back to the original point, I think the original poster was worried about not having the willpower to give to charity and, if he doesn't have that, worried he also might not have the higher levels of willpower you would presumably need to do something truly brave if it was needed (like, in his example, resisting someone like the Nazis in 1930's Germany.) And he was able to use that fear in order to increase his willpower and give more to charity.

He might not be wrong about beliefs about himself. Just because a person actually would prefer X to Y, it doesn't mean he is always going to rationally act in a way that will result in X. In a lot of ways we are deeply irrational beings, especially when it comes to issues like short term goals vs long term goals (like charity vs instant rewards).

A person might really want to be a doctor, might spend a huge amount of time and resources working his way through medical school, and then may "run out of willpower" or "suffer from a lack of ... (read more)

0tadasdatys
You're right. Instead it means that he doesn't have the willpower required to become a doctor. Presumably, this is something he didn't know before he started school.

Sure, that's very possible. Just because it didn't work last time doesn't mean it can't work now with better technology.

I think anyone who goes into it now, though, had better have a really detailed explanation for why consumer interest was so low last time, despite all the attention and publicity the "sharing economy" got in the popular press, and a plan to quickly get a significant customer base this time around. Something like this can't work economically without scale, and I'm just not sure if the consumer interest exists.

2chaosmage
You make excellent points. I hadn't even heard of SnapGoods, NeighborGoods etc. I'm imagining it not as a peer-to-peer service, but more along the lines of a car rental company that owns a fleet of things it rents out. I think you're right about the need to build a significant customer base rather quickly. My guess is that it might be feasible to first offer big expensive things that people don't usually own already, like a fancy jacuzzi, a top-end VR rig, a complete "wedding size" sound system and a bouncy castle. And once you're known for those, work your way down into more normal consumer goods, guided by the requests of your first customers.

Yeah, a number of businesses tried it between 2007 and 2010. SnapGoods was probably the best known. This article lists 8; 7 went out of business, and the 8th one is just limping along with only about 10,000 people signed up for it. (And that one, NeighborGoods, only survived after removing the option to rent something.)

https://www.fastcompany.com/3050775/the-sharing-economy-is-dead-and-we-killed-it

There just wasn't a consumer base interested in the idea, basically. Silicon valley loved to talk about it, people loved writing articles about it, but it... (read more)

0ChristianKl
When it comes to people doing car-sharing, it doesn't work well between private persons, and it seems like the Uber model won out. By the same token, a company that's more like Uber and that has much lower transaction costs due to self-driving cars has a higher chance of success. Timing is very important when it comes to startups. Where Webvan failed, Instacart does much better.

The other likely outcome seems to be that you keep enough vehicles on hand to satisfy peak demand, and then they just sit quietly in a parking lot the rest of the time.

Probably this.

Then again, it's not all bad; it might be beneficial for the company to get some time between the morning rush hour and the evening rush hour to bring the cars somewhere to be cleaned, to recharge them, do any maintenance and repair, etc. I imagine just cleaning all the fast food wrappers and whatever out of the cars will be a significant daily job.

It depends on the details. What will happen to traffic? Maybe autonomous cars will be more efficient in terms of traffic, but on the flip side of the coin, people may drive more often if driving is more pleasant, which might make traffic worse.

Also, if you're using a rental or "Uber" model where you rent the autonomous car as part of a service, that kind of service might be a lot better if you're living in a city. It's much easier to make a business model like that work in a dense urban environment; wait times for an automated car to come get you will probably be a lot shorter, etc.

1Screwtape
Here's something I've been curious about: If you're running an autonomous car rental, what do you do with peak load times? I have to imagine there's a drastic difference in demand between 9am~11am vs 5pm~7pm, and an even larger difference between those and 2am~4am. Part of me thinks demand drives prices and everyone shifts their arrival/departure times a little to try and find lower transport costs, but I also would assume that rush hour traffic would do that all on its own. The other likely outcome seems to be that you keep enough vehicles on hand to satisfy peak demand, and then they just sit quietly in a parking lot the rest of the time. Half of my daily commute takes place on a single lane dirt road, and I therefore have no idea why people endure heavy traffic unless they have completely inflexible work hours. Does anyone have any ideas?

You don't own a drill that sits unused 99.9% of the time, you have a little drone bring you one for an hour for like two dollars.

Just a quick note; people have been predicting exactly this for about 10-15 years because of the internet, and it hasn't happened yet. The "people will rent a hammer instead of buying it" idea was supposed to be the ur-example of the new sharing economy, but it never actually materialized, while instead Uber and Airbnb and other stuff did. We can speculate about why it didn't happen, but IMHO, it wasn'... (read more)

0ChristianKl
If a 20 dollar tool costs 5 dollars to rent, that's not worthwhile. It has to get much cheaper, and if you had delivery within a 30-minute window for 50 cents instead of 5 dollars, that would change the usage patterns. Are there real businesses that tried the model and had other issues than simply being too expensive?

Sure. Obviously people will always consider trade-offs, in terms of risks, costs, and side effects.

Although it is worth mentioning that if you look at, say, most people with cancer, people seem to be willing to go through extremely difficult and dangerous procedures even to just have a small chance of extending lifespan a little bit. But perhaps people will be less willing to do that with a more vague problem like "aging"? Hard to say.

I don't think it will stay like that, though. Maybe the first commercially available aging treatment will be borderline enough that it's a reasonable debate whether it's worthwhile, but I expect treatments to continue improving from that point.

I don't believe that my vote will change a result of a presidential election, but I have to behave as if it will, and go to vote.

The way I think of this is something like this:

There is something like a 1 in 10 million chance that my vote will affect the presidential election (and also some chance of my vote affecting other important elections, like Congress, Governor, etc.).

Each year, the federal government spends $3.9 trillion. Its influence is probably actually significantly greater than that, since that doesn't include the effect of law... (read more)
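(The expected-value arithmetic this comment gestures at, as a sketch: the 1-in-10-million and $3.9 trillion figures are the comment's own; the four-year term multiplier is my assumption.)

```python
p_decisive = 1 / 10_000_000   # chance one vote swings the presidential election
annual_budget = 3.9e12        # federal spending per year, in dollars
term_years = 4                # length of a presidential term (assumed multiplier)

expected_influence = p_decisive * annual_budget * term_years
print(f"${expected_influence:,.0f}")  # ~$1,560,000 of expected budget influence per vote
```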

1turchin
Yes, but there are situations when the race is not tight, like 40 to 60, and it is very improbable that my vote will work alone; but if we assume that something like ADT works, all people similar to me will behave as if I command them, and the total utility will be millions of times more, as my vote will turn into a million votes of people similar to me.

It would certainly have to depend on the details, since obviously many people do not choose the longevity treatments that are already available, like healthy eating and exercise, even though they are usually not very expensive.

Eh. That seems to be a pretty different question.

Let's say that an hour of exercise a day will extend your lifespan by 5 years. If you sleep 8 hours a night, that's about 6.3% of your waking time; if you live 85 years without exercise vs 90 years with exercise, you probably have close to the same amount of non-exercising wa... (read more)
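(Worked through with the comment's own numbers, as a sketch: 16 waking hours a day, one of them spent exercising in the second scenario.)

```python
waking_hours_per_day = 16  # assuming 8 hours of sleep per night

no_exercise = 85 * 365 * waking_hours_per_day          # waking hours over 85 years
with_exercise = 90 * 365 * (waking_hours_per_day - 1)  # 90 years, minus exercise time

print(no_exercise, with_exercise)  # 496400 vs 492750: nearly the same
```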

0entirelyuseless
It is an assumption that it will be that easy. If there is a complicated surgery that will extend people's lives by 5 years, or even by 20, it is likely that many people will not want it.

This is a falsifiable empirical prediction. We will see whether it turns out to be true or not.

Yes, agreed.

I should probably be more precise. I don't think that 100% of people will necessarily choose longevity treatments once they become available. But depending on the details, I think the percentage will be pretty high. I think that a very high percentage of people who today sound ambivalent about it will go to great lengths to get it once it becomes something that exists in reality.

I also think that the concern that "other people" will get to live ... (read more)

0entirelyuseless
It would certainly have to depend on the details, since obviously many people do not choose the longevity treatments that are already available, like healthy eating and exercise, even though they are usually not very expensive. Sure, maybe someone will be more motivated by an extra 50-100 years than by an extra 5-15. But then again maybe they won't.

I don't think the lack of life extension research funding actually comes from people not wanting to live; I think it has more to do with the fact that the vast majority of people don't take it seriously yet and don't believe that we could actually significantly change our lifespan. That's compounded with a kind of "sour grapes" defensive reflex, where when people think they can never get something, they try to convince themselves they don't really want it.

I think that if progress is made, at some point there will be a phase change, when more people start to realize that it is possible and suddenly flip from not caring at all to caring a great deal.

2entirelyuseless
This is a falsifiable empirical prediction. We will see whether it turns out to be true or not. I think more likely you will see some ambivalence in people's responses. I do see many people around the age of 80 who think they have lived long enough, and it pretty clearly has nothing to do with their state of health. I expect the same thing to happen in many cases even after aging can be prevented biologically. Calling it "sour grapes" is just not recognizing that some people are different from you.

You can use relativity to demonstrate that certain events can be simultaneous in one reference frame and not in others, but I'm not seeing any way to do that in this case, assuming that the simulated and non-simulated future civilizations are both in the same inertial reference frame. Am I missing something?

That's one of the advantages of what's known as "preference utilitarianism". It defines utility in terms of the preferences of people; so, if you have a strong preference for remaining alive, then remaining alive is therefore the pro-utility option.

2entirelyuseless
The problem with this (from the point of view of people like turchin) is that many people do not show many signs of not wanting to die. He mentioned this recently.
1turchin
Thanks for pointing out the right term.

The answer to those objections, by the way, is that an "adequately objective" metaethics is impossible: the minds of complex agents (such as humans) are the only place in the universe where information about morality is to be found, and there are plenty of possible minds in mind-design space (paperclippers, pebblesorters, etc.) from which it is impossible to extract the same information.

Eliezer attempted to deal with that problem by defining a certain set of things as "h-right", that is, morally right from the frame of reference of the human mind. He made clear that alien entities probably would not care about what is h-right, but that humans do, and that's good enough.

I don't think that's actually true.

Even if it was, I don't think you can say you have a belief if you haven't actually deduced it yet. Even taking something simple like math, you might believe theorem A, theorem B, and theorem C, and it might be possible to deduce theorem D from those three theorems, but I don't think it's accurate to say "you believe D" until you've actually figured out that it logically follows from A, B, and C.

If you've never even thought of something, I don't think you can say that you "believe" it.

Except by their nature, if you're not storing them, then the next one is not true.

Let me put it this way.

Step 1: You have a thought that X is true. (Let's call this 1 bit of information.)

Step 2: You notice yourself thinking step 1. Now you say "I appear to believe that X is true." (Now this is 2 bits of information: X, and belief in X.)

Step 3: You notice yourself thinking step 2. Now you say "I appear to believe that I believe that X is true." (3 bits of information: X, belief in X, and belief in belief in X.)

If at any poin... (read more)

1DragonGod
I see. I thought that you don't actually have to store those beliefs in your head. My idea was: Do you disagree?

Fair.

I actually think a bigger weakness in your argument is here:

I believe that I believe that I believe that I exist. And so on and so forth, ad infinitum. An infinite chain of statements, all of which are exactly true. I have satisfied Eliezer's (fatuous) requirements for assigning a certain level of confidence to a proposition.

That can't actually be infinite. If nothing else, your brain cannot possibly store an infinite regression of beliefs at once, so at some point your belief in belief must run out of steps.

0DragonGod
I do not need to actually store those beliefs—it is only necessary to be able to state them—and I wrote a program that outputs those beliefs.
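(DragonGod's actual program isn't shown; a minimal sketch of a program that can state the n-th belief on demand, without storing the chain, might look like this.)

```python
def belief(n: int) -> str:
    """Return the statement with n levels of 'I believe that' nesting."""
    return "I believe that " * n + "I exist."

# States any belief in the chain on demand; nothing is stored.
for level in range(3):
    print(belief(level))
# I exist.
# I believe that I exist.
# I believe that I believe that I exist.
```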

I think the best possible argument against "I think, therefore I am" is that there may be something confused or oversimplified about your definition of "I", your definition of "think", or your definition of "am".

"I" as a concept might turn out to not really have much meaning as we learn more about the brain, for example, in which case the most you could really say would be "Something thinks therefore something thinks" which loses a lot of the punch of the original.

0DragonGod
I have a coherent definition of "I".

Here's a question. As humans, we have the inherent flexibility to declare that something has either a probability of zero or a probability of one, and then the ability to still change our minds later if somehow that seems warranted.

You might declare that there's a zero probability that I have the ability to inflict infinite negative utility on you, but if I then take you back to a warehouse where I have banks of computers that I can mathematically demonstrate contain uploaded minds which are going to suffer in the equivalent of hell for an infinite amo... (read more)

Uh. About 10 posts ago I linked you to a long list of published scientific papers, many of which you can access online. If you wanted to see the data, you easily could have.
