
You're looking at Less Wrong's discussion board, which shows all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

Comment author: MrMind 27 October 2016 07:13:48AM 2 points

Define "rationality things"...

they're full of guys who'll hit on her no doubt!

So what? Do you think that's any different on any street or in any store? On the contrary, I would say that rationality meetups tend to be more respectful of boundaries than the average group of human beings.

Comment author: Lumifer 27 October 2016 03:02:48PM 1 point

This should be right up LW's alley. Reconstruct dead people as... chatbots? Quote:

And one day it will do things for you, including keeping you alive. You talk to it, and it becomes you.

Comment author: MrMind 27 October 2016 07:08:30AM 1 point

The world is certainly not going to change

We'll see about that...

Comment author: Pfft 28 October 2016 03:11:40AM 0 points

any suggestions?

Comment author: Pfft 28 October 2016 03:08:40AM *  0 points

It sounds pretty spectacular!

I found one paper about comets crashing into the sun, but unfortunately they don't consider comets as big as yours--the largest is a "Hale-Bopp sized" one, which they take to be 10^15 kg (which already seems a little low; Wikipedia suggests 10^16 kg).

I guess the biggest uncertainty is how common such big comets are (that is, how often we should expect to see one crash into the sun). In particular, I think the known sun-grazing comets are much smaller than the big comet you consider.

Also, I wonder a bit about your 1 second. The paper says,

The primary response, which we consider here, will be fast formation of a localized hot airburst as solar atmospheric gas passes through the bow-shock. Energy from this airburst will propagate outward as prompt electromagnetic radiation (unless or until bottled up by a large increase in optical depth of the surrounding atmosphere as it ionizes), then in a slower secondary phase also involving thermal conduction and mass motion as the expanding hot plume rises.

If a lot of the energy reaching the Earth comes from the prompt radiation, then it should arrive in one big pulse. On the other hand, if the comet plunges deep into the sun, and most of the energy is absorbed and then transmitted via thermal conduction and mass motion, then that must be a much slower process. By comparison, a solar flare involves between 10^20 and 10^25 J, and it takes several minutes to develop.
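
For scale, here is a rough comparison of these energies in Python (a sketch of my own; the masses and the 600 km/s impact speed come from the comments in this thread, not from the paper):

    # Back-of-the-envelope comparison of impact energies with solar-flare
    # energies (my own arithmetic; masses and the 600 km/s impact speed are
    # the figures quoted in this thread).
    V = 600e3  # m/s, impact speed of a sun-grazing comet

    def kinetic_energy(mass_kg):
        """Kinetic energy E = (1/2) m v^2 at the assumed impact speed."""
        return 0.5 * mass_kg * V**2

    hale_bopp = kinetic_energy(1e15)  # the paper's largest comet: ~1.8e26 J
    big_comet = kinetic_energy(1e18)  # turchin's 100 km body:     ~1.8e29 J

    # The largest solar flares release ~1e25 J over several minutes, so even
    # the "Hale-Bopp sized" impact carries an order of magnitude more energy
    # than a maximal flare, and the 100 km body four orders of magnitude more.
    print(f"{hale_bopp:.1e} J and {big_comet:.1e} J")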

Comment author: entirelyuseless 28 October 2016 02:19:38AM 0 points

Naturally, if I were mistaken it would be appropriate to concede that I was mistaken. However, this was not about being mistaken. The point is that in arguments the truth is rarely all on one side; there is usually some truth on both. And in this case, in the way that matters, namely the way I was calling important, it is not possible to accidentally wipe out alien civilizations. But in another, unimportant way, it would be possible in the scenario under consideration (a scenario which is also very unlikely in the first place).

In particular, when someone fears something happening "accidentally", they mean to imply that it would be bad if that happened. But if you accidentally fulfill your true values, there is nothing bad about that, nor is it something to be feared, just as you do not fear accidentally winning the lottery. Especially since you would have done it anyway, if you had known it was contained in your true values.

In any case I do not concede that it is contained in people's true values, nor that there will be such an AI. But even apart from that, the important point is that it is not possible to accidentally wipe out alien civilizations, if that would be a bad thing.

Comment author: komponisto 28 October 2016 12:09:42AM 0 points

Because you wrote one sentence without actually giving the argument. So I went with my prior on your argument.

That's what I'm suggesting you not do.

Writing out arguments, and in general, making one's thought processes transparent, is a lot of work. We benefit greatly by not having a norm of only stating conclusions that are a small inferential distance away from public knowledge.

I'm not saying you should (necessarily) believe what I say, just because I say it. You just shouldn't jump to the conclusion that I don't have justifications beyond what I have stated or am willing to bother stating.

Cf. Jonah's remark:

If I were to restrict myself to making claims that I could substantiate in a mere ~2 hours, that would preclude the possibility of me sharing the vast majority of what I know.

Comment author: turchin 27 October 2016 09:21:19PM *  0 points

Any insights about the following calculation?

If a 100 km body fell onto the Sun, it would produce a flash 1000 times stronger than the Sun's luminosity for 1 second, which would result in fires and skin burns for humans on the day side of the Earth.

The calculation is just the energy of the impact; many "ifs" that could weaken or strengthen the consequences are not accounted for. Such a body could come from the family of sun-grazing comets, which originate in the Oort cloud. The risk is not widely recognized, and it is just my idea.

The basis for this calculation is the following: comets hit the Sun at a speed of 600 km/s, and the mass of a 100 km body (comets of this size do exist) is 10^18 kg, so the energy of impact is 3.6x10^29 J, while the Sun's luminosity is 3x10^26 W.
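
A minimal sanity check of this arithmetic (my own sketch; note that the standard kinetic-energy formula E = (1/2)mv^2 gives half the quoted figure, which appears to come from mv^2 without the 1/2 -- the order of magnitude, and hence the conclusion, is unchanged):

    # Check the impact-energy and flash-brightness claims.
    m = 1e18      # kg, mass of a 100 km comet (the figure above)
    v = 600e3     # m/s, impact speed
    L_sun = 3e26  # W, solar luminosity (rounded, as above)

    E = 0.5 * m * v**2  # 1.8e29 J; the quoted 3.6e29 J omits the 1/2 factor
    print(E / L_sun)    # ~600: dumped in 1 s, the flash outshines the Sun
                        # ~600-fold (the quoted 3.6e29 J would give ~1200-fold,
                        # i.e. the "1000 times stronger" claim)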

Comment author: Manfred 27 October 2016 06:50:30PM 0 points

Do the exercise where you look into the other person's eyes for 10 minutes by the clock; it's a fun social-skills-building exercise that brings you closer to the person you do it with.

Do pre-mortems of things. That is: supposing we're 5 years in the future and we look back at the problems we had, what do we expect those problems to have been? Can we change them?

Comment author: ChristianKl 27 October 2016 06:16:44PM 0 points

You make the decision to send the resources necessary to transform a galaxy without knowing much about the galaxy. The only things you know are based on the radiation that you can pick up many light years away.

Once you have sent your vehicle to the galaxy, it could of course decide to do nothing or fly into the sun, but that would be a waste of resources.

Comment author: Dagon 27 October 2016 06:07:31PM 0 points

I think we can all agree that an entity's anticipated future experiences matter to that entity. I hope (but would be interested to learn otherwise) that imaginary events such as fiction don't matter. In between, there is a hugely wide range of how much it's worth caring about distant events.

I'd argue that outside your light-cone is pretty close to imaginary in terms of care level. I'd also argue that events after your death are pretty unlikely to affect you (modulo basilisk-like punishment or reward).

I actually buy the idea that you care about (and are willing to expend resources on) subjunctive realities on behalf of not-quite-real other people. You get present value from imagining good outcomes for imagined-possible people even if they're not you. This has to get weaker as it gets more distant in time and more tenuous in connection to reality, though.

But that's not even the point I meant to make. Even if you care deeply about the far future for some reason, why is it reasonable to prefer weak, backward, stupid entities over more intelligent and advanced ones? Preferring them just because they're made of meat-substance similar to yours seems a bit parochial, and hypocritical given the way you treat slightly less-capable organic beings like lettuce.

Woodchopper's post indicated that he'd violently interfere with (indirectly, via criminalization) activities that make it infinitesimally more likely that we'll be identified and located by ETs. This is well beyond reason, even if I overstated my long-term lack of care.

Comment author: turchin 27 October 2016 06:01:05PM 0 points

It looks similar to CEV, but not extrapolated into the future; rather, it is applied to a single person's desire in a known context. I think it is a good approach for making even simple AIs safe. If I ask my robot to take all the spheres out of the room, it will not cut off my head.

Comment author: Manfred 27 October 2016 03:01:51PM 0 points

This is why people sometimes make comments like "goal functions can themselves be learning functions." The problem is that we don't know how to take natural language and unlabeled inputs and get any sort of reasonable utility function as an output.
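
As a toy illustration of "goal functions can themselves be learning functions" (a sketch of my own, not anyone's proposed design; all names here are made up), the utility an agent maximizes can literally be a model that keeps being refit to feedback:

    # A utility function that is itself learned rather than hand-written.
    class LearnedUtility:
        """Linear utility over hand-coded outcome features, fit to human ratings."""

        def __init__(self, n_features):
            self.w = [0.0] * n_features

        def score(self, features):
            # The quantity the agent would maximize; it shifts as w is updated.
            return sum(wi * fi for wi, fi in zip(self.w, features))

        def update(self, features, human_rating, lr=0.1):
            # One gradient step on squared error against the human's rating.
            err = self.score(features) - human_rating
            self.w = [wi - lr * err * fi for wi, fi in zip(self.w, features)]

    # The hard problem named above sits upstream of this sketch: getting from
    # raw natural language and unlabeled inputs to "features" and "ratings".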

Comment author: Lumifer 27 October 2016 02:39:17PM 0 points

It's a decent definition of small talk :-)

Uh-uh... yeah... No way! Oh, and then... How could she?! Yep... Hmm... I think so, too... Ummm...

Comment author: Lumifer 27 October 2016 02:37:11PM 0 points

At the level needed to drown out a "frequent loud cough", you will need headphones just to escape the white noise itself X-/

Comment author: Lumifer 27 October 2016 02:34:31PM 0 points

What are some rationality things I can do with my girlfriend?

The NSFW ones.

Comment author: WalterL 27 October 2016 02:18:22PM 0 points

Run that by me one time?

It seems like you are conceding that we CAN wipe out alien civilizations accidentally. Thread over. But the "unimportant" qualifier makes me think that it isn't quite so cut and dried. Can you explain what you mean?

Comment author: entirelyuseless 27 October 2016 01:32:15PM 0 points

I think converting galaxies already includes paying attention, since if you don't know what's there it's difficult to change it into something else.

Maybe you're thinking of this as though it were a fire that just burned things up, but I don't think "converting galaxies" can or will work that way.

Comment author: entirelyuseless 27 October 2016 01:16:24PM 0 points

"Well now I see we disagree at a much more fundamental level." Yes. I've been saying that since the beginning of this conversation.

If humans are optimizers, they must be optimizing for something. Now suppose someone comes to you and asks, "Do you agree to turn on this CEV machine?" When you respond, are you optimizing for that thing or not? If you say yes, and you are optimizing for the original thing, then the CEV cannot (as far as you know) be compromising the thing you were optimizing for. If you say yes and are not optimizing for it, then you are not an optimizer. So you must agree with me on at least one point: either 1) you are not an optimizer, or 2) you should not agree to CEV if it compromises your personal values in any way. I maintain both of those, but you must maintain at least one of them.
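
To formalize the dichotomy (my own reading, with labels I am introducing: O = "you are an optimizer for value V", A = "you agree to turn on CEV", C = "CEV compromises V"), the premise of the paragraph is (O and A) -> not C, and the claimed conclusion is (not O) or not (A and C); a brute-force check over all truth assignments confirms it follows:

    # Verify that the optimizer/CEV dichotomy follows from the stated premise.
    from itertools import product

    def implies(p, q):
        return (not p) or q

    for O, A, C in product([False, True], repeat=3):
        premise = implies(O and A, not C)      # agreeing while optimizing for V
                                               # means CEV can't compromise V
        conclusion = (not O) or not (A and C)  # not an optimizer, OR don't agree
                                               # to a value-compromising CEV
        assert implies(premise, conclusion)    # holds in all 8 cases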

In earlier posts (though not during this particular discussion) I have explained why it is not possible that you are really an optimizer. People here tend to neglect the fact that an intelligent thing has a body. So, e.g., Eliezer believes that an AI is an algorithm and nothing else. But in fact an AI has a body just as much as we do. And those bodies have various tendencies, and they do not collectively add up to optimizing for anything, except in an abstract sense in which everything is an optimizer -- a rock is an optimizer, and so on.

"We convert the resources of the world into the things we want." To some extent, but not infinitely, in a fanatical way. Again, that is the whole worry about AI -- that it might do that fanatically. We don't.

I understand you think that some creatures could have fundamental values that are perverse from your point of view. This is because you, like Eliezer, think that values are intrinsically arbitrary. I don't, and I have said so from the beginning. It might be true that slave-owning values could be fundamental in some extraterrestrial race, but if they were, slavery in that race would be very, very different from slavery in the human race, and there would be no reason to oppose it in that race. In fact, you could say that slavery exists in a fundamental way in the human race, and there is no reason to oppose it: parents can tell their kids to stay out of the road, and the kids have to obey, whether they want to or not. Note that this is very, very different from the kind of slavery you are concerned about, and there is no reason to oppose the real kind.

Comment author: TheAncientGeek 27 October 2016 12:23:32PM 0 points

Rationality is more than one thing. Even if there are defenses of neoreaction and libertarianism as epistemic rationality, they are open to the criticism that they are not instrumentally rational pursuits, because they are too far out to influence anything in the real world.

Comment author: turchin 27 October 2016 12:16:52PM 0 points

Now, about the simulation. The fact that they will be run serially is very unlikely a priori, so any probability shift from it will not be large. And it could not be known from inside a simulation (or else it is not a simulation, or at least not a completely isolated one). But that is not the main objection. The main one is that if I know that I am at an exact time moment in the future, I also know that I am in a simulation, as my time is not the same as the outside time provided to me. There are also problems with the many copies of me in the infinite number of simulations and real worlds, which make the total calculation even more difficult. The same me could appear in a real world and in a simulation, so saying that I am in one specific type of world is meaningless until I get some evidence: I am the same in many worlds. But after I get evidence that I am in a simulation, it is not a simulation.

Comment author: turchin 27 October 2016 10:10:56AM 0 points

Phil of FB: (A more concrete example: 10,000 people are traveling to Mars. 1,000 board a large slow shuttle that takes a single trip to Mars between t1 and t3. Meanwhile, a really fast smaller shuttle takes 10 people at a time to Mars (going back and forth 900 times) during this same period. At time t3, all 10,000 people have safely arrived on Mars. If asked, at t3, whether one took the large slow shuttle or the fast small shuttle, one should say the latter. (Right?) But this is the opposite answer, I believe, that one should give if in the middle of the journey, at time t2, one is aroused from one's hibernation (let's say) and asked whether they are at that very moment on the slow or fast shuttle. Thus, it seems to matter whether the relevant event is ongoing or over. But I’m not exactly clear about why.)

My reply: Imagine there is a random person, Bob. If Bob is asked before the flight to Mars, he will say that he will most likely fly on the small, quick spaceship. But if we ask a random person during the flight (and it turns out he is Bob, which is the important point here), then Bob is most likely on the large, slow ship. The difference between the two situations is that we must add the probability that the random person will be Bob, and this probability is rather small and will exactly compensate. The fact which is not represented is that there is a third group of travellers, those who are already on Mars or waiting to start on Earth; when I am told that at the moment t2 I am still flying, I learn that I am not one of the 8,990 "waiters" and update my probabilities accordingly.
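
These numbers can be checked directly (a Monte Carlo sketch of my own; the batch timing is an assumption -- 900 consecutive fast trips of 10 people each, spread evenly over the same interval as the single slow trip):

    # Who is a randomly sampled traveller at a randomly sampled moment in [t1, t3)?
    import random

    SLOW, FAST, TRIPS = 1000, 9000, 900
    on_slow = on_fast = 0
    for _ in range(100_000):
        person = random.randrange(SLOW + FAST)  # uniformly random traveller
        t = random.random()                     # uniformly random time in [t1, t3)
        if person < SLOW:
            on_slow += 1                        # slow passengers fly the whole time
        else:
            batch = (person - SLOW) // 10       # which of the 900 trips they ride
            if batch / TRIPS <= t < (batch + 1) / TRIPS:
                on_fast += 1

    print(on_slow / (on_slow + on_fast))  # in flight at t2: ~1000/1010 = 0.99 slow
    print(FAST / (SLOW + FAST))           # asked at t3 which you took: 0.90 fast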

Comment author: MrMind 27 October 2016 07:09:19AM 0 points

Sure, but at the moment the problem isn't that there is too much food in general; there is too much food here.
