All of David_Bolin's Comments + Replies

"If that was so, they'd get the same wobbly feeling on hearing the fire alarm, or even more so, because fire alarms correlate to fire less than does smoke coming from under a door. "

I do get that feeling even more so, in exactly that situation. I therefore normally do not respond to fire alarms.

I don't think the belief in life after death necessarily indicates a wish to live longer than we currently do. I think it is a result of the fact that it appears to people to be incoherent to expect your consciousness to cease to be: if you expect that to happen, what experience will fulfill that expectation?

Obviously none. The only expectation that could theoretically be fulfilled by experience is expecting your consciousness to continue to exist. This doesn't actually prove that your consciousness will in fact continue to exist, but it is probably the r... (read more)

0Ishaan
Yeah, in general, I'm sure part of it is that humans can't easily conceptualize true death in the first place (but that's even further grounds for not taking them seriously when they say they want to die). Just like part of it is our instinctive animism/anthropomorphism. I certainly don't want to minimize the role of "cognitive illusions" in the whole thing. But I don't think it's a coincidence that these beliefs depict the universe as fairly utopian - the afterlife often resolves misunderstandings, rebalances moral scales, makes room for further growth... and earthly suffering is generally given higher purpose. Remember - a true human utopia doesn't give its members all they think they desire, or eliminate the sort of suffering which serves a deeper human value; fiction is replete with failed utopias along those lines. Despite all the terrible things, we could be in a utopia right now if only we had sufficiently optimistic beliefs about what happens outside the narrow window of our worldly experiences. Is it a coincidence that religions often have precisely these optimistic beliefs? Anyway, I doubt you need to get into "what does the mouse expect" to explain that particular result: Very young children also lack the theory of mind to understand that not everyone has the same information as they do. If the mouse had simply left the room and the alligator ate the mouse's friend squirrel, they might say the mouse was sad and angry (not realizing that the mouse was gone from the room and wouldn't know about what the alligator did).

Ok. In that sense I agree that this is likely to be the case, and would be the case more often than not with any educated person's assessment of who does rigorous work.

It is not that these statements are "not generally valid", but that they are not included within the axiom system used by H. If we attempt to include them, there will be a new statement of the same kind which is not included.

Obviously such statements will be true if H's axiom system is true, and in that sense they are always valid.
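For reference, the pattern being described is the standard incompleteness phenomenon; a compact statement (my summary, assuming H is consistent, recursively axiomatized, and strong enough for arithmetic) is:

$$\exists\, G_H:\quad H \nvdash G_H \ \ \text{and}\ \ H \nvdash \neg G_H, \qquad\text{and}\qquad H' := H + G_H \text{ again satisfies the hypotheses, so } \exists\, G_{H'}:\ H' \nvdash G_{H'}.$$

If H's axioms are true in the standard model, each such $G_H$ is also true there, which is the sense of "valid" used above.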

2TezlaKoil
The intended meaning of valid in my post is "valid step in a proof" in the given formal system. I reworded the offending section. Yes, and one also has to be careful with the use of the word "true". There are models in which the axioms are true, but which contain counterexamples to Provable(#φ) → φ.

How does this not come down to saying that people you consider rigorous did, on average, more work on their texts than people you don't consider rigorous, and therefore wrote less overall?

If we take a random (educated) person, and ask him to classify authors into rigorous and non-rigorous, something similar should be true on average, and we should find similar statistics. I can't see how that shows some deep truth about the nature of rigorous thought, except that it means doing more work in your thinking.

I agree that it does mean at least that, so if e.g. some author has written more than 100 books, that is a pretty good sign that he is not worth reading, even if it is not a conclusive one.

5PhilGoetz
That is what it comes down to. I'm not trying to show any truth about the nature of rigorous thought.

I looked at your specified program. The case there is basically the same as the situation I mentioned, where I say "you are going to think this is false." There is no way for you to have a true opinion about that, but there is a way for other people to have a true opinion about it.

In the same way, you haven't proved that no one and nothing can prove that the program will not halt. You simply proved that there is no proof in the particular language and axioms used by your program. When you proved that the program will not halt, you were using a differe... (read more)

I said "so the probability that a thing doesn't exist will be equal to or higher than etc." exactly because the probability would be equal if non-existence and logical impossibility turned out to be equivalent.

If you don't agree that no logically impossible thing exists, then of course you might disagree with this probability assignment.
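Put as a one-line sketch (my framing, granting the premise that nothing logically impossible exists):

$$\{x : x \text{ is logically impossible}\} \subseteq \{x : x \text{ does not exist}\} \;\Longrightarrow\; P(x \text{ does not exist}) \;\ge\; P(x \text{ is logically impossible}),$$

with equality exactly when the two classes coincide, which is why the claim was stated as "equal to or higher than".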

Also, there is definitely some objective fact about which you cannot get the right answer:

"After thinking about it, you will decide that this statement is false, and you will not change your mind."

If you conclude that this is false, then the statement will be true. No paradox, but you are wrong.

If you conclude that this is true, then the statement will be false. No paradox, but you are wrong.

If you make no conclusion, or continuously change your mind, then the statement will be false. No paradox, but the statement is undecidable to you.
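The case analysis can be written out mechanically; here is a minimal sketch (my own illustration, not part of the original comment) in Python:

```python
# S = "After thinking about it, you will decide that this statement is false,
#      and you will not change your mind."
# S is true exactly when the reasoner's settled verdict on S is "false".

def s_is_true(final_verdict):
    return final_verdict == "false"

for verdict in ["true", "false", None]:  # None = no conclusion, or keeps changing
    actual = s_is_true(verdict)
    if verdict is None:
        outcome = "S is false; no paradox, but S is undecidable to the reasoner"
    elif (verdict == "true") == actual:
        outcome = "the reasoner is right"  # this branch is never reached
    else:
        outcome = "no paradox, but the reasoner is wrong"
    print(f"verdict={verdict!r}: S is actually {actual} -> {outcome}")
```

Whatever settled verdict the loop tries, the verdict never matches S's actual truth value, which is all the three cases above assert.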

There is no program such that no Turing machine can determine whether it halts or not. But no Turing machine can take every program and determine whether or not each of them halts.

It isn't actually clear to me that you are a Turing machine in the relevant sense, since there is no context where you would run forever without halting, and there are contexts where you will output inconsistent results.

But even if you are, it simply means that there is something undecidable to you -- the examples you find will be about other Turing machines, not yourself. There is nothing impossible about that, because you don't and can't understand your own source code sufficiently well.
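For reference, a sketch of the standard diagonal argument behind the claim above that no Turing machine can decide halting for every program (my own illustration; would_halt is a hypothetical decider that cannot actually be implemented):

```python
def would_halt(program_source: str, input_data: str) -> bool:
    """Hypothetical total decider: True iff program_source halts on input_data."""
    raise NotImplementedError("assumed for contradiction; no such decider exists")

DIAGONAL_SOURCE = '''
def diagonal(src):
    if would_halt(src, src):   # ask the decider about this very program
        while True:            # ...then do the opposite: loop forever
            pass
    return "halted"
'''

# Feeding diagonal its own source forces a contradiction:
#   would_halt(DIAGONAL_SOURCE, DIAGONAL_SOURCE) == True   -> diagonal loops forever
#   would_halt(DIAGONAL_SOURCE, DIAGONAL_SOURCE) == False  -> diagonal halts
# So any fixed decider is wrong about at least one program, exactly the
# "for every program at once" sense of undecidability used above.
```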

0Houshalter
The program I specified is impossible to prove will not halt. It doesn't matter what Turing machine, or human, is searching for the proof. It can never be found. It can't exist. The paradox is that I can prove that. Which means I can prove the program searching for proofs will never halt. Which I just proved is impossible.

I've seen this kind of thing happen before, and I don't think it's a question of demographics or sockpuppets. Basically I think a bunch of people upvoted it because they thought it was funny, then after there were more comments, other people more thoughtfully downvoted it because they saw (especially after reading more of the comments) that it was a bad idea.

So my theory is that it was a question of a difference in timing and in whether or not other people had already commented.

It is definitely true that this could be someone's subjective probability, if he doesn't understand the statement.

But if you do understand it, a thing which is logically impossible doesn't exist, so the probability that a thing doesn't exist will be equal to or higher than the probability that it is logically impossible.

0[anonymous]
I feel like I might understand now. Can I represent your points as follows: * all instances of things which are logically impossible also don't exist * therefore, there are more things which don't exist than those that are logically impossible Assuming statement 1 is correct, without accepting a further premise I don't feel compelled to accept the second premise. It sounds like things which are logically impossible may in fact be equivalent to things which don't exist, and vice-versa. And that sounds intuitively compelling. If something was logically possible, it would happen. If it wasn't possible, it's not going to happen. Or, the agent's modelling of the world is wrong. Importantly, I don't accept premise 1, as I've indicated in another comment reply (something about how I find I'm wrong about the apparent impossibility of something, or possibility of something.)

Maybe. I upvoted it because I thought it was correct, and because it corrects the misconception that desiring to live forever is obviously the correct thing to do, and that everyone would want it if they weren't confused.

Note that unless the probability that you begin to want to die during a given period of time keeps getting lower, forever, you will almost surely begin to want to die sooner or later.
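A sketch of the probability fact behind this (my framing; it treats the per-period probabilities $p_n$ as independent):

$$P(\text{never want to die}) \;=\; \prod_{n=1}^{\infty} (1 - p_n) \;>\; 0 \quad\text{only if}\quad \sum_{n=1}^{\infty} p_n < \infty,$$

so unless $p_n$ shrinks toward zero fast enough, the product is $0$; in particular, if $p_n \ge \epsilon > 0$ for all $n$, then $P(\text{never}) \le (1 - \epsilon)^N$ for every $N$, which goes to $0$.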

The post would have to be toned down quite a bit in order to appear to be possibly sincere.

4Dorikka
shrug The pdf for sincerity looks bimodal to me.
6Houshalter
I don't think that comment was sincere.
Alicorn150

I'm just used to the detractors misspelling or abbreviating "Yudkowsky", so this was jarring.

I use dtSearch for the text searching, which works pretty well. I don't have to use it constantly but it works well when I need it, e.g. finding something from a website I viewed a few months ago, when I no longer remember which site it was, or determining whether I've ever come across a certain person's name before, finding one of my passwords after I've forgotten where I saved it, and so on. Also, sometimes I haven't been sure about which keywords to search for, but I was able to determine that something must have happened on a particular day, and then... (read more)

I do the screenshot / webcam thing, and OCR the screenshots so that my entire computing history is searchable.

2btrettel
How useful have the text logs proved? Assuming you use a terminal, do you also keep your history from that?

Yes, both of these happen. Also, it's harder to be friends even with the people you already know because you feel dishonest all the time (obviously because you are in fact being dishonest with them.)

If you are a Muslim in many Islamic countries today, and you decide that Islam is false, and let people know it, you can be executed. This does not seem to have a high expected value.

Of course, you could decide it is false but lie about it, but people have a hard time doing that. It is easier to convince yourself that it is true, to avoid getting killed.

1[anonymous]
It's really not that hard, especially in countries with institutionalized religions. Just keep going to mosque, saying the prayers, obeying the norms, and you've got everything most believers actually do, minus the belief.

I don't see why so many people are assuming that Aumann is accepting a literal creation in six days. I read the article and it seems obvious to me that he believes that the world came to be in the ordinary way accepted by science, and he considers the six days to be something like an allegory. There is no need for explanations like a double truth or compartmentalization.

2[anonymous]
It's not at all obvious what he believes. He didn't use the word allegory, metaphor, or anything similar in his explanation. He never said anything to indicate that he saw science as "true" and religion as "false". He simply said that they're two different ways to look at the world.

It should probably be defined by calibration: do some people have a type of belief where they are always right?

-1StellaAthena
You can phrase statements of logical deduction such that they have no premises and only conclusions. If we let S be the set of logical principles under which our logical system operates and T be some sentence that entails Y, then S AND T implies Y is something that I have absolute certainty in, even if this world is an illusion, because the premise of the implication contains all the rules necessary to derive the result. A less formal example of this would be the sentence: If the rules of logic as I know them hold and the axioms of mathematics are true, then it is the case that 2+2=4
0Lumifer
Self-referential and anthropic things would probably qualify, e.g. "I believe I exist".

Of course if no one has absolute certainty, this very fact would be one of the things we don't have absolute certainty about. This is entirely consistent.

Eliezer isn't arguing with the mathematics of probability theory. He is saying that in the subjective sense, people don't actually have absolute certainty. This would mean that mathematical probability theory is an imperfect formalization of people's subjective degrees of belief. It would not necessarily mean that it is impossible in principle to come up with a better formalization.
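For reference, the formal point on the probability-theory side (a standard calculation, not a claim from the original comment): a probability of exactly 1 can never be revised by conditioning. For any evidence $E$ with $P(E) > 0$,

$$P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)} \;=\; \frac{P(E \mid H) \cdot 1}{P(E \mid H) \cdot 1 + P(E \mid \neg H) \cdot 0} \;=\; 1.$$

The open question is then the subjective one raised above: whether any actual human credence behaves like this.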

1Lumifer
Errr... as I read EY's post, he is certainly talking about the mathematics of probability (or about the formal framework in which we operate on probabilities) and not about some "subjective sense". The claim of "people don't actually have absolute certainty" looks iffy to me, anyway. The immediate two questions that come to mind are (1) How do you know? and (2) Not even a single human being?

Yes, it won't work for people who can't manage a day without eating at least from time to time, although you can also try slowing down the rate of change.

As I said in another comment, changes in water retention (and scale fluctuations etc.) don't really matter because it will come out the same on average.

3Lumifer
Volatility matters. Imagine that one day the temperature in your house was set to 50F (+10C) and the next day -- to 90F (+32C). On the average it comes out to 70F (+20C), so it's fine, right?

It doesn't matter. Fluctuations with scales and with water retention may mean that you may end up fasting an extra day here and there for random reasons, but you will also end up eating on extra days for the same reason. It ends up the same on average.

Technology frequently improves some things while making other things worse. But sooner or later people find a way to improve both the some things and the other things. In this particular case, maybe they haven't found it yet.

"But actually they weren't aware that they were not eating less."

This is why I advocate the method of using a Beeminder weight goal (or some equivalent): weigh yourself every day, and don't eat for the rest of the day when you are above the center line. When you are below it, you can eat whatever you want for the rest of the day.

This doesn't take very much willpower because there is a very bright line: you don't have to carefully control what or how much you eat; either you eat today or you don't.
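A minimal sketch of the rule (my own illustration; the linear center line, starting weight, and dates are made-up parameters, not anything Beeminder-specific):

```python
from datetime import date

# Hypothetical goal: 200 lbs down to 180 lbs over 200 days.
START_WEIGHT, TARGET_WEIGHT = 200.0, 180.0
GOAL_DAYS = 200
START_DATE = date(2025, 1, 1)

def center_line(today: date) -> float:
    """Straight line from the starting weight down to the target weight."""
    days_elapsed = min((today - START_DATE).days, GOAL_DAYS)
    return START_WEIGHT + (TARGET_WEIGHT - START_WEIGHT) * days_elapsed / GOAL_DAYS

def todays_plan(measured_weight: float, today: date) -> str:
    # The bright line: above the center line you fast for the rest of the day,
    # below it you eat whatever you want.
    return "fast today" if measured_weight > center_line(today) else "eat freely today"

print(todays_plan(195.2, date(2025, 3, 1)))  # center line ~194.1, so: fast today
```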

1Lumifer
That has some issues. First, changes in water retention jitter your daily weight by a pound or two. Second, you assume good tolerance for intermittent fasting. If you weigh yourself in the morning, decide you're not going to eat for the whole day, and then suffer a major sugar crash in the afternoon, that will be problematic.
2Jiro
Do scales actually work with enough accuracy that doing this even makes any sense?

I don't think I understand. What is the third possible environment? And why exactly is the behavior stupid? It sounds like it might be actually true that it is too dangerous to test whether you are in Heaven or Hell in that situation.

2Stuart_Armstrong
Sorry, "two" possible environments, not three. The point is that Hell can be made arbitrarily likely by our choice of computing language - even in more complex environments, like our own world, thus creating an agent that does nothing (or that follows any particular policy).

"Imagine a world in which no one was living below the average income level."

This is a world where everyone has exactly the same income. I don't see any special reason why it would be desirable, though.

-1IffThen
That was sort of my point. Most people are going to imagine it as a more perfect world. But if they were to think through all of the implications, they would see that it probably involves massive taxation and a very very strong central government, with less motivation for people to do dirty and difficult jobs. They want something they can't, or don't, accurately imagine.

Probability that there are two elephants given one on the left and one on the right.

In any case, if your language can't express Fermat's last theorem then of course you don't assign a probability of 1 to it, not because you assign it a different probability, but because you don't assign it a probability at all.

1Ronny Fernandez
I agree. I am saying that we need not assign it a probability at all. Your solution assumes that there is a way to express "two" in the language. Also, the proposition you made is more like "one elephant and another elephant makes two elephants" not "1 + 1 = 2". I think we'd be better off trying to find a way to express 1 + 1 = 2 as a boolean function on programs.

Basically the problem is that a Bayesian should not be able to change its probabilities without new evidence, and if you assign a probability other than 1 to a mathematical truth, you will run into problems when you deduce that it follows of necessity from other things that have a probability of 1.
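The coherence constraint being appealed to can be stated explicitly (standard probability logic, my framing):

$$A \models H \;\Longrightarrow\; P(H) \ge P(A), \qquad\text{so } P(A) = 1 \text{ forces } P(H) = 1.$$

Assigning $P(H) < 1$ to a theorem that follows of necessity from probability-1 axioms therefore means the credences were incoherent from the start; raising $P(H)$ later looks like an update, but there was no new evidence about the world, which is the problem described above.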

1KnaveOfAllTrades
Why can't the deduction be the evidence? If I start with a 50-50 prior that 4 is prime, I can then use the subsequent observation that I've found a factor to update downwards. This feels like it relies on the reasoner's embedding though, so maybe it's cheating, but it's not clear and non-confusing to me why it doesn't count.
0Ronny Fernandez
How do you express, Fermat's last theorem for instance, as a boolean combination of the language I gave, or as a boolean combination of programs? Boolean algebra is not strong enough to derive, or even express all of math. edit: Let's start simple. How do you express 1 + 1 = 2 in the language I gave, or as a boolean combination of programs?

Any program that reads this post and these articles wasn't stuck in a sandbox anyway.

1ike
Offline internet dumps are a thing.
4TrE
I'm pretty sure "Humans, please ignore this post" wasn't serious, and this article is mainly for humans.

I would agree with the social norm of never ever going swimming. In fact, I have a very hard time understanding why people are so willing to basically immerse themselves in an environment so deadly to human beings. I certainly never do it myself.

3Lumifer
There is a difference between "I don't want to do X" and "I don't want other people to do X". Desiring your peculiarities to become social norms is ill-advised, I'd say. You might find other people's peculiarities to be not to your liking.

You are assuming that human beings are much more altruistic than they actually are. If your wife has the chance of leaving you and having a much better life where you will never hear from her again, you will not be sad if she does not take the chance.

If the other player is choosing randomly between two numbers, you will have a 50% chance of guessing his choice correctly with any strategy whatsoever. It doesn't matter whether your strategy is random or not; you can choose the first number every time and you will still have exactly a 50% chance of getting it.
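A one-line version of the calculation (my own; it assumes the opponent picks each of the two numbers with probability 1/2, independently of your choice):

$$P(\text{match}) \;=\; q \cdot \tfrac{1}{2} + (1 - q) \cdot \tfrac{1}{2} \;=\; \tfrac{1}{2} \quad\text{for any probability } q \text{ of picking the first number.}$$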

0EngineerofScience
But you want to be purely unpredictable or the opponent (if they are a super AI) would gradually figure out your strategy and have a slightly better chance. A human (without tools) can't actually generate a random number. If your opponent was guessing a non-completely random number / a "random" number in their head, then you want your choice to be random. I should have said if the opponent chooses a non-completely random number then you should randomly determine your number.

That is not a useful rebuttal if in fact it is impossible to guarantee that your AGI will not be a sociopath no matter how you program it.

Eliezer's position generally is that we should make sure everything is set in advance. Jacob_cannell seems to be basically saying that much of an AGI's behavior is going to be determined by its education, environment, and history, much as is the case with human beings now. If this is the case it is unlikely there is any way to guarantee a good outcome, but there are ways to make that outcome more likely.

If you are "procrastinate-y" you wouldn't be able to survive this state yourself. Following a set schedule every moment for the rest of your life is very, very difficult and it is unlikely that you would be able to do it, so you would soon be dead yourself in this state.

Ramez Naam discusses it here: http://rameznaam.com/2015/05/12/the-singularity-is-further-than-it-appears/

I find the discussion of corporations as superintelligences somewhat persuasive. I understand why Eliezer and others do not consider them superintelligences, but it seems to me a question of degree; they could become self-improving in more and more respects and at no point would I expect a singularity or a world-takeover.

I also think the argument from diminishing returns is pretty reasonable: http://www.sphere-engineering.com/blog/the-singularity-is-not... (read more)

[anonymous]100

On the same note, but probably already widely known, Scott Aaronson on "The Singularity Is Far" (2008): http://www.scottaaronson.com/blog/?p=346

1TheAncientGeek
Now, that's what I was looking for.

Human beings are not very willing to be rational, and that includes those of us on Less Wrong.

If you're really honest about your willingness to be rational, it seems like this could be kind of depressing.

0cameroncowan
We aren't always rational, we do things that make us comfortable and keep us safe and able to function. I think if we tied our ego and super ego to rationality we all might as well have a mass suicide party and go together because that's about the mood we would all be in.
1ThePrussian
Could you expand on that? I'm not sure I follow...

I actually meant it more generally, in the sense of highly unusual situations. So gjm's suggested path would count.

But more straightforwardly apocalyptic situations could also work. So a whole bunch of people die, then those remaining become concerned about existential risk -- given what just happened -- and this leads to people becoming convinced Zoltan would be a good idea. This is more likely than a virus that kills non-Zoltan supporters.

Caricatures such as describing people who disagree with you as saying "let's bring back slavery" and supporting "burning down the whole Middle East" are not productive in political discussions.

1Acty
I'm not trying to describe the people who disagree with me as wanting to bring back slavery or supporting burning down the whole Middle East; that isn't my point and I apologise if I was unclear. As I understood it, the argument levelled against me was that: people who say they're really angry about terrorism are often idiots who hold idiotic beliefs, like, "let's send loads of tanks to the Middle East and kill all the people who might be in the same social group as the terrorists and that will solve everything!" and in the same way, people who say they're really angry about racism are the kind of people who hold idiotic beliefs like "let's ban all science that has anything to do with race and gender!" and therefore it was reasonable of them to assume, when I stated that I was opposed to racism, that I was the latter kind of idiot. To which my response is that many people are idiots, both people who are angry about terrorism and people who aren't, people who are angry about racism and people who aren't. There are high levels of idiocy in both groups. Being angry about terrorism and racism still seems perfectly appropriate and fine as an emotional arational response, since terrorism and racism are both really bad things. I think the proper response to someone saying "I hate terrorism" is "I agree, terrorism is a really bad thing", not "But drone strikes against 18 year olds in the middle east kill grandmothers!" (even if that is a true thing) and similarly, the proper response to someone saying "I hate racism" is "I agree, genocide and lynchings are really bad", not "But studies about race and gender are perfectly valid Bayesian inference!" (even if that is a true thing).

I tried to register there just now but the email which is supposed to contain the link to verify my email is empty (no link). What can I do about it?

I think this is probably true, and I have seen cases where e.g. Eliezer is highly upvoted for a certain comment, and some other person is upvoted little or not at all for basically the same insight in a different case.

However, it also seems to me that their long comments do tend to be especially insightful in fact.

I don't think this would be helpful, basically for the reason Lumifer said. In terms of how I vote personally, if I consider a comment unproductive, being longer increases the probability that I will downvote, since it wastes more of my time.

That isn't really fully general because not everything is evidence in favor of your conclusion. Some things are evidence against it.

0Gunnar_Zarncke
It is fully general at least in the sense that it admits a weak response which at the same time simulates compromise and weakens the other position.

For many people, 32 karma would also be sufficient benefit to justify the investment made in the comment.

You can pretty easily think of "apocalyptic" scenarios in which Zoltan would end up getting elected in a fairly normal way. Picking a president at random from the adult population would require even more improbable events.

0jacob_cannell
I loved this comment, but then realized I may not have understood it - is the apocalyptic scenario one where a bunch of people die, but somehow those remaining tend to be Zoltan supporters?

A common one that I see works like this: first person holds position A. A second person points out fact B which provides evidence against position A. The first person responds, "I am going to adjust my position to position C: namely that both A and B are true. B is evidence for C, so your argument is now evidence for my position." Continue as needed.

Example:

First person: The world was created.
Second person: Living things evolved, which makes it less likely that things were created than if they had just appeared from nothing.
First person: The wo... (read more)

3TheAncientGeek
This is counterable, by pointing out that movement has occurred. If done honestly, it constitutes convergence, which is arguably desirable.

Ok. My link was also for the USA and you are correct that there would be differences in other countries.

This sounds like Robin Hanson's idea of the future. Eliezer would probably agree that in theory this would happen, except that he expects one superintelligent AI to take over everything and impose its values on the entire future of everything. If Eliezer's future is definitely going to happen, then even if there is no truly ideal set of values, we would still have to make sure that the values that are going to be imposed on everything are at least somewhat acceptable.
