Comment author: Pimgd 26 July 2016 09:36:07AM *  1 point [-]

because producing new evidence is not possible anymore.

Okay...

So, say it turns out that, well, Eve is irrational. Somehow.

Now what? Do we go "neener-neener" at her? What's the point? What's the use that you could get out of labeling this behavior irrational?

Suppose Adam dies and is cryo-frozen. During Eve's life, there will be no resuscitation of Adam. Sometime afterward, however, Omega will arrive, deem the problem interesting and simulate Adam via really really really advanced technology.

Turns out he didn't do it.

Is she now rational because, well, it turns out she was right after all? Well, no, because getting the right answer for the wrong reasons is not the rational way to go about things (in general; it might help in specific cases where you need to get the answer right but don't care how).

....

Actually, let me just skip over a few paragraphs I was going to write and skip to the end.

You cannot have 100% confidence. Because then your belief is set in stone and it cannot change. You can have a googolplex nines if you want, but not 100% confidence.
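
A quick sketch, in standard Bayesian notation, of why a probability-1 belief is frozen (this is textbook Bayes, not anything specific to Eve's case):

```latex
% Bayes' rule for a hypothesis H and any piece of evidence E:
%   P(H|E) = P(E|H) P(H) / [ P(E|H) P(H) + P(E|~H) P(~H) ]
% Setting P(H) = 1 forces P(~H) = 0, so for any E with P(E|H) > 0:
\[
P(H \mid E) = \frac{P(E \mid H) \cdot 1}{P(E \mid H) \cdot 1 + P(E \mid \neg H) \cdot 0} = 1
\]
% The posterior equals the prior no matter what E says: no evidence can move it.
```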

Fallacy of argument from probability (if it can happen then it must happen) aside: how is it rational to discard a belief you are holding on shaky evidence just because you think, with near-absolute certainty, that no more evidence will arrive, ever? What will you do when there is more evidence? (Hint: meeting Adam's mother at the funeral and hearing childhood stories about what a nice kid he was is more evidence about his character, albeit very weak evidence - and so are studies showing that certain demographics of the time period Adam lived in had certain characteristics.) You gotta update! (I don't think the fallacy I mentioned applies; if it does, we can fix it with big numbers: if you are to hold this belief everywhere, then the probabilities go up as it turns from "in this situation" to "in at least one of all these situations.")

So to toss a belief aside because you think there will be no more evidence is the wrong action, to me. You can park a belief. That is, take no action. Maintain the status quo. No change in input means no change in output. But you do NOT clear the belief.

Let me put up a strawman - I'll leave it to others to see if there's something harder underneath. If you hold this action - "I think there will be no more evidence, and I am not very confident either way, so I will discard the output" - to be the rightful one to take, how do you prevent yourself from getting boiled like a frog in a pan? (Yes, that's a false story - still, I intend the metaphorical meaning: how do you stop yourself from discarding every bit of evidence that comes your way, because you "know" there to be no more evidence?)

In my opinion, doing as you say weakens or even destroys the gradual "update" mechanism. This leads to less effective beliefs, and thus is irrational.


Were we to now look at the three questions, I'd answer:

Again, Eve is irrational because she says it cannot be falsified. If we let Eve say "I still think he didn't do it because of his character, and I will keep believing this until I see evidence to the contrary - and if such evidence never arrives, I will keep believing this forever" - then yes, Eve is rational.

The second question: yes, via this specific example. Here it can, thus it can.

Yes, it can be extended to belief in God. Provided we restrict "God" to a REALLY TINY thing. As in, gee, a couple thousand years ago, something truly fantastic happened - it was God! I saw it with my own eyes! You can keep believing there was, at that point in time, an entity causing this fantastic thing. Until you get other evidence, which may never happen. What you CANNOT do is say, "hey, maybe this 'God' that caused this one fantastic thing is also responsible for creating the universe and making my neighbor win the lottery and my aunt get cancer and ..." That's unloading a huge complexity on an earlier belief without paying appropriate penalties.

You don't only need evidence that the fantastical events were caused, you also need evidence they were caused by the same thing if you wish to attribute them to that same thing.

Comment author: Wind 28 July 2016 11:36:56AM 0 points [-]

You don't only need evidence that the fantastical events were caused, you also need evidence they were caused by the same thing if you wish to attribute them to that same thing.

Assume I observe X, Y, Z and form three hypotheses

  • A: All of X, Y, Z had causes
  • B: All of X, Y, Z had different causes
  • C: All of X, Y, Z had the same cause

A obviously has the highest probability, since it includes B and C as special cases. However, which one of B and C do you think should get a complexity penalty over the other?
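
One toy way to make the question concrete - the bit costs and the 2^(-length) prior below are assumptions of mine, purely for illustration: charge each postulated cause a fixed description cost and see which model pays more.

```python
# Toy description-length scoring (invented costs: 10 bits per postulated
# cause, 1 bit per cause->event link).
def description_length(n_causes, n_links):
    return 10 * n_causes + 1 * n_links

# Model C: one cause produced all three of X, Y, Z.
dl_c = description_length(n_causes=1, n_links=3)  # 13 bits

# Model B: three separate causes, one per event.
dl_b = description_length(n_causes=3, n_links=3)  # 33 bits

# A complexity-penalized prior of the 2^(-length) kind then favors C:
print(f"C favored over B by a factor of 2^{dl_b - dl_c} = {2 ** (dl_b - dl_c)}")
```

Of course the comparison flips if the single shared cause must itself be far more complicated than three simple separate ones; the toy only shows that the answer depends on how you count.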

In your story:

Yes, it can be extended to belief in God. Provided we restrict "God" to a REALLY TINY thing. As in, gee, a couple thousand years ago, something truly fantastic happened - it was God! I saw it with my own eyes! You can keep believing there was, at that point in time, an entity causing this fantastic thing. Until you get other evidence, which may never happen. What you CANNOT do is say, "hey, maybe this 'God' that caused this one fantastic thing is also responsible for creating the universe and making my neighbor win the lottery and my aunt get cancer and ..." That's unloading a huge complexity on an earlier belief without paying appropriate penalties.

The relevant comparison is: given that God did X, what is the probability that God also did Y and Z, versus that God did not do those things?

P(God did Y, Z | God did X) = P(God did X, Y, Z) / P(God did X)

vs.

P(God did not do Y, Z | God did X) = P(God did X, and something other than God did Y, Z) / P(God did X)

I am uncertain about how to correctly apply a complexity penalty, but I do believe that the multi-explanation model "God did X, and something other than God did Y, Z" should get a complexity penalty over the single-explanation model "God did X, Y, Z".
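
A minimal numeric sketch of that comparison (the prior values are invented; only their ratio matters):

```python
# Toy priors of my own, to make the two conditional probabilities concrete.
p_single = 0.001   # prior for "God did X, Y and Z"
p_multi = 0.0001   # prior for "God did X; something else did Y and Z"
                   # (smaller, reflecting the complexity penalty argued above)

p_god_did_x = p_single + p_multi  # both hypotheses agree that God did X

print(f"P(God did Y,Z | God did X)        = {p_single / p_god_did_x:.3f}")  # ~0.909
print(f"P(God did not do Y,Z | God did X) = {p_multi / p_god_did_x:.3f}")   # ~0.091
```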

The belief "God caused some tiny thing, a couple thousand years ago", should correlated with the belief "God did this big thing right now". This is why I firmly believe that God did not cause some tiny thing, a couple of thousand years ago.

Comment author: Wind 27 July 2016 11:39:42PM 1 point [-]

Saying "X is emergent" is conveying some information, if there is someone in the room that does not already know this fact. Here is an example:

Quarks are emergent.

This is not an explanation, though. It is more like an anti-explanation. I just claimed that there is an underlying explanation to quarks, and then stopped. I told you to make space for an explanation in your mental world model, and then I left that space empty. If you believed my statement, and if you don't already know how quarks emerge and from what, I just made an explanation-shaped hole in your mind. This is not nice of me.

But at least you now know that there is an explanation to be found. When you thought quarks were fundamental, you did not even know to look, because fundamental things cannot be explained, only described.

Comment author: hairyfigment 06 June 2016 06:44:06AM 0 points [-]

While you could be right, you're claiming the students have a clear and accurate model of their own beliefs. What does that mean? Could they explain the nature of technical explanation?

There's a certain popular series of books which portrays intelligence as a matter of parroting facts, without trying to connect any two of them - not even the disappearance of every child in the world and an event in world politics shortly thereafter. Now, you could try to explain this by saying the authors (plural) are deliberately selling their customers garbage to maximize return-on-investment. And you could try to claim that their fans are buying the books just to signal tribal membership. And that explanation may have some power - but I draw the line at saying that they all understand clearly what the books lack in terms of credibility.

Comment author: Wind 27 July 2016 09:44:56PM *  0 points [-]

This answer has taken some time because I wanted to read the link you gave before writing back. I still have not read most of it, but I think I have read enough to get your reference.

I can't comment on the books, because I have no idea what series you are talking about. But your tone does suggest that you expect me to know what series this is. I am guessing that these books are very popular in the USA, but more or less completely unheard of in Europe? Probably something with a Christian theme?

I know for certain that there exist students in the world who use the teacher's password, or similar techniques, fully knowing what they do. I am slowly accepting that there probably also exist people who think they know stuff when all they have is a statement they do not actually understand. I currently have no good estimate as to which of these is more common.

As to whether the first type of student could explain the nature of technical explanation: what do you mean exactly? I am not absolutely sure what Yudkowsky means by this concept, but that only means I am uncertain about his mind, not about my own.

To me, an explanation does not feel like an explanation unless I understand all the bits, and I cannot remember ever thinking differently. If I were told for the first time that light is a wave, I would try to fit my current best understanding of light together with my current best understanding of waves, to figure out what "light is a wave" could possibly mean, and then I would ask for more information, because that by itself is clearly not an explanation. This must have happened at some point, even though I can't remember the exact event. I do know that something in my childhood triggered me to want to know more about water waves.

For me it is really hard to imagine that anyone could confuse a teacher's password with knowledge, which makes me biased towards other explanations. So maybe I am wrong. But also, do not underestimate people's willingness to knowingly use tricks to pass a class or get better grades. Here are two examples that I remember classmates openly talking about:

  • Use extra sloppy handwriting to conceal spelling mistakes.
  • On an open question, just write everything you can think of that seems at least semi-relevant, and hope that you included whatever the teacher was getting at somewhere in there.
Comment author: Wind 11 June 2016 01:06:58PM *  0 points [-]

I totally agree that it is often better to study narrow and deep.

But this word policing is not net helpful, all things considered.

Yudkowsky does not like it when people call the invention of LSD a Singularity. OK, I can see why. But I don't like Yudkowsky's use of the word "singularity" either, because that is absolutely not what the word means in physics or math. I used to be quite upset over the fact that AI people had generalized the word "singularity" to mean "exponential or super-exponential growth". On the other hand, whatever. It is really not that big of a deal. I will have to say "mathematical singularity" sometimes, to specify what I mean, whenever it is not clear from the context. I can live with that compromise.

Different fields use the same word to mean different things. This sometimes leads to misunderstanding, which is bad. But the alternative would be for every field to make up its own strings of syllables for every technical word, which is just too impractical.

Also, I happen to know that when astrophysicists talk about the evolution of stars, they are not borrowing the word "evolution" from biology. They are using "evolution" in the more original meaning, which is "how something changes over time", from the word "evolve". The evolution of a star is the process of how the star changes over time, from creation to end. No one in the field thinks they should borrow ideas from biology on the grounds that biologists use the same word. Neither can I imagine anyone in evolutionary biology deciding to draw conclusions from theories of the evolution of stars, just because of the common word "evolution".

I can totally imagine someone who knows close to nothing about both stars and biology being confused by this word "evolution" being used in different settings. Confusing the uneducated public is generally bad. More specifically, it is uncooperative, since in most fields you yourself are part of the uneducated public. But there is also a trade-off: how much effort should we put into this? Avoiding otherwise useful uses of words is a high cost.

The Singularity, Quantum Chromodynamics, Neural networks, Tree (in graph theory), Imaginary numbers, Magic (as a placeholder for the unexplained), Energy, etc.

The use of metaphors and other types of borrowed words in technical language is widespread because it is so damned practical. Sometimes we use metaphors the same way a good poet does, to lend the precision of one concept to another. But sometimes one just needs a label, and reusing an old word is less effort than coming up with, and remembering, an actual new sound.

Back to the trade-off. How much would it help if different topics never borrowed language from each other? Would the general public be significantly less confused? For this tactic to work, everyone, not just scientists, would have to stop borrowing words from each other. And we would have to restrict the usage of hundreds (maybe thousands) of words that are already in use.

But maybe there is a third way? Instead of teaching everyone not to borrow words, we could teach everyone that words can have different meanings in different contexts. This is also a huge project, but considerably smaller, for several reasons:

  1. It is an easier lesson to learn. At least for me, generalizing from one example.
  2. It is more aligned with how natural language actually works.
  3. It is a lesson that can be taught one person at a time. We don't have to change all at once for it to work.

My model of Yudkowsky (which is created solely from reading many of his LessWrong posts) now complains that my suggestion will not work, because of how the brain works. Using the same words causes our brains to use the same mental bucket, or something like that.

But I know that my suggestion works, at least for me. My brain has different mental settings for different topics and situations, where words can have different meanings in different settings. It does not mean that I have conflicting mental models of the world, just that I keep conflicting definitions of words. It is very much like switching to a different language. The word "barn" means shed in English, but it means child in my native language, Swedish, and this is not a problem. I would never even have connected English::barn and Swedish::barn if it had not been pointed out to me in a totally unrelated discussion.

Unfortunately I don't know how my brain ended up like this, so I can't show you the way. I can only testify that the destination exists. But if I were to guess, I would say that I just gradually built up different sets of technical vocabulary, which sometimes had overlapping sounds. Maybe being bilingual helps? Not overly thinking in words probably helps too.

Sometimes, when a conversation is sliding from one topic to another - maybe a physics conversation takes a turn into pure math - I will notice that my brain has switched language settings, because the sentence I remember just saying does not make sense to me anymore.

Comment author: Chrysophylax 29 January 2013 07:21:43PM -1 points [-]

Large corporations are not really very like AIs at all. An Artificial Intelligence is an intelligence with a single utility function, whereas a company is a group of intelligences with many complex utility functions. I remain unconvinced that aggregating intelligences and applying the same terms is valid - it is, roughly speaking, like trying to apply chromodynamics to atoms and molecules. Maximising shareholder value is also not a simple problem to solve (if it were, the stock market would be a lot simpler!), especially since "shareholder value" is a very vague concept. In reality, large corporations almost never seek to maximise shareholder value (that is, in theory one might, but I can't actually imagine such a firm). The relevant terms to look up are "satisficing" and "principal-agent problem".

This rather spoils the idea of firms being intelligent - the term does not appear applicable (which is, I think, Eliezer's point).

Comment author: Wind 11 June 2016 10:01:26AM 1 point [-]

Who said anything about AI?

Super Intelligence = a general intelligence that is much smarter than any human.

I consider myself to be an intelligence, even though my mind is made of many sub-processes and I don't have a stable, coherent utility function (I am still working on that).

The relevant questions are: Is it sometimes useful to model corporations as single agents? - I don't know. Are corporations much smarter than any human? - No, they are not.

I say "sometimes useful", because, some other time you would want to study the corporations internal structure, and then it is defiantly not useful to see it as one entity. But since there are no fundamental indivisible substance of intelligence, any intelligence will have internal parts. Therefore having internal parts can not be exclusive to being an intelligent agent.

Comment author: Vaniver 01 June 2016 09:54:15PM 1 point [-]

Is the claim that this is a school thing or a life thing?

This is a life thing. One programming example might be people running code they've copy-pasted off of StackOverflow to see if it solves their problem--they don't understand what it will do, but they have a vague hope that it will be the magic incantation that will do what they want it to do.

But even there they may have a sense that programming has some objectivity to it. Probably a better example is dysfunctional organizational dynamics, where guessing what the boss wants you to say serves you better than trying to estimate what best accomplishes organizational goals.

"If I write 'because of heat conduction' on a test, I have a chance of getting points." is an anticipation controller.

Right, but read this section again:

This is not a hypothesis about the metal plate. This is not even a proper belief. It is an attempt to guess the teacher's password.

Guessing the teacher's password is obviously a hypothesis--but it's a hypothesis about the teacher, not the plate.

Comment author: Wind 05 June 2016 09:41:48PM 1 point [-]

I am unsure if we are disagreeing or not. I think that it is bad if the system encourages people to go for the wrong incentives. My point is that I believe people know when they are hacking the system. I think the students themselves know that their hypothesis is about the teacher and not the plate.

This is a life thing. One programming example might be people running code they've copy-pasted off of StackOverflow to see if it solves their problem--they don't understand what it will do, but they have a vague hope that it will be the magic incantation that will do what they want it to do.

If my goal is just to make the program work, then copy-pasting from StackOverflow might be a good idea. As long as I know what I am doing, and don't fool myself into thinking that I understood what I just copy-pasted, I don't see the problem.

I have done a little amateur programming and I admit that I have used this method. Of course I would prefer to understand everything, but at one point or another I just wanted some lines to do X for me, so that I could get to the part of the code that I was actually interested in.

Probably a better example is dysfunctional organizational dynamics, where guessing what the boss wants you to say serves you better than trying to estimate what best accomplishes organizational goals.

Yes, that is a good life example. However, in this example I think it is even more clear that the employee has accurate beliefs about the world. The error is with the system, not with the employee.

This is not a hypothesis about the metal plate. This is not even a proper belief. It is an attempt to guess the teacher's password.

Guessing the teacher's password is obviously a hypothesis--but it's a hypothesis about the teacher, not the plate.

I agree with you, Vaniver, as you say: "it's a hypothesis about the teacher". But I disagree with Yudkowsky: a belief about the teacher is a proper belief.

Yudkowsky claims that guessing the teacher's password is a behavior that occurs because the student does not understand their own knowledge, or lack thereof.

I claim that guessing the teacher's password is an example of perverse instantiation. The students have correct beliefs and are doing the rational thing, given their incentive structure. They don't think that they understand heat conduction, and they don't care, because understanding heat conduction is not their goal. Their goal is to get acceptable grades with a minimum amount of effort.

Using proxy incentives works badly on intelligent agents, even if they are made out of flesh.
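
A minimal sketch of that claim - the actions, numbers, and utility function below are invented for illustration: an agent that rationally maximizes the proxy (points at minimum effort) ignores the true objective (understanding) entirely.

```python
# action: (hours of effort, points earned, understanding gained) - toy numbers
actions = {
    "study the physics deeply":      (10, 9, 9),
    "memorize the password phrases": (2, 8, 0),
    "do nothing":                    (0, 2, 0),
}

def student_utility(effort, points, understanding):
    # The student's actual goal: acceptable grades at minimum effort.
    # Note that understanding does not enter the utility at all.
    return points - effort

best = max(actions, key=lambda a: student_utility(*actions[a]))
print(best)  # -> "memorize the password phrases"
```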

Comment author: Wind 30 May 2016 12:37:33AM 1 point [-]

I notice that I am confused by this post.

Is the claim that this is a school thing or a life thing? I can see how this behavior might happen if a student is more interested in getting good grades than in actual learning. In such a situation, "learning the teacher's password" might be a shortcut to your actual goals.

If the claim is that this is a life thing, could someone give me some more non-classroom examples? Organized religion counts as classroom.

When I first heard that light is a wave, I interpreted that sentence in my brain and gave it meaning. I can't say for sure that I gave it the correct meaning. But I definitely know that I did not just save away the sound pattern as truth. Because I don't think that way, and I can't even imagine thinking that way.

I can, on the other hand, imagine thinking: "If I write 'because of heat conduction' on a test, I have a chance of getting points." This is not how I went through school, because I was interested in actual learning, but I can model a student who thinks this way.

"If I write 'because of heat conduction' on a test, I have a chance of getting points." is an anticipation controller.

Comment author: false_vacuum 18 February 2011 01:44:46PM 3 points [-]

This is a mistake. There is actually a two-electron state in the OP. (And there is no assumption 'that they are independently and individually real.' The claim is merely that the two-electron state is real.)

Comment author: Wind 12 April 2016 12:51:57AM *  1 point [-]

I am with pudge on this.

The current deepest level of understanding of physics is quantum field theory, and according to that theory there are no such things as particles, fundamentally. The only things that exist are quantum fields. (Except gravity, but I will ignore that huge problem for now, because I don't think it is important for this discussion.)

The two-particle state belongs to the Fock space formulation that you get when Taylor expanding quantum fields. This is not to say that the two-particle state is not a real possibility. To my best understanding of the math involved, there is a quantum field configuration that is exactly the two-electron state. But the two electrons here are NOT two separate objects.
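
For concreteness, here is what I understand the math to look like in standard second-quantized notation (a sketch, and the notation is my own gloss, not from the original post):

```latex
% A two-electron state is one state of one field, built from the vacuum:
\[
|\psi\rangle = \sum_{k_1, k_2} c(k_1, k_2)\, a^{\dagger}_{k_1} a^{\dagger}_{k_2} |0\rangle,
\qquad c(k_1, k_2) = -c(k_2, k_1)
\]
% The creation operators anticommute, so swapping the labels k_1 and k_2
% merely reshuffles the same sum. There is no "electron 1" and "electron 2"
% to point at separately -- only one configuration of one field.
```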

The philosopher's mistake is not about whether two objects can be proven to be exactly identical. The philosopher's mistake is in thinking that two electrons are different objects. From now on I will steelman the philosopher a bit and assume that what he meant was "fundamental objects" and not "electrons". He was just not up to date with the latest ontology and thought that "electron" was an example of a "fundamental object", but has now updated his statement to be about actual things, and not mere emergent phenomena such as individual particles.

All the quantum fields in the Standard Model clearly have different properties: different charges, different masses, etc. But it is not inconceivable, within this model, to have two identical but separate objects. There are probably quantum fields that have not yet been detected, because of weak charge and/or high mass. It is possible that in the future we will find two new quantum fields that, to the limit of our technology, are identical. Maybe later, when we discover that all the quantum fields are just aspects of some deeper level, we might be able to prove that those two quantum fields are identical. But in the same stroke we will also find that these fields are not actually separate objects.

In the end the philosopher will still be correct.

Whatever your deepest level of understanding is, you will always have to go one level deeper before you can prove that two "different objects" are identical in every way.
("Different objects" = things that appear to be different objects at the previous deepest level of understanding.)

One could argue that the philosopher is wrong if there is no bottom level of physics, because by talking about fundamental objects he kind of assumes that such things exist. If physics is bottomless, then that assumption is wrong. However, I see no reason to believe that physics is bottomless.

Comment author: IL 14 April 2008 09:38:22AM 11 points [-]

But the experiment doesn't prove that the two photons are really identical; it just proves that the photons are identical as far as the configurations are concerned. The photons could still have tiny tags with a number on them, but for some reason the configurations don't care about tags.

Comment author: Wind 12 April 2016 12:04:02AM 0 points [-]

Yes, technically, you could maybe do that. At least as long as you don't have two photons occupying the same state, in which case I am unsure.

However, your general quantum state does not have a precise number of photons. So before you can start flagging anything, you would have to express the quantum state as a sum of states that each have an exact number of photons. Then you could, separately for each term in that sum, label each photon. And then, a moment later, you would have to do it all over again, because you cannot track over time which photon is which.
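
To make that concrete, this is the Fock-space picture I have in mind (a sketch in standard notation, not a derivation):

```latex
% A general field state is a superposition over photon-number sectors:
\[
|\psi\rangle = c_0 |0\rangle + \sum_{k} c_1(k)\, a^{\dagger}_{k} |0\rangle
             + \sum_{k_1, k_2} c_2(k_1, k_2)\, a^{\dagger}_{k_1} a^{\dagger}_{k_2} |0\rangle + \cdots
\]
% Only within one term of this sum is "the number of photons" even defined.
% And since photon creation operators commute, c_2 can be taken symmetric in
% k_1, k_2: a tag saying "this one is photon number 1" is pure bookkeeping.
```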

So why would you go to all that trouble to invent an epiphenomenon?

Comment author: Wind 10 April 2016 01:57:08PM *  0 points [-]

This is Awesome! Exactly what I want!

Two questions:

1) ~~It seems like I need to pay by PayPal. Only I can't, because apparently my Swedish phone number is not valid. I tried both +467684964XX and 00467684964XX and neither works. (XX = the two last digits of my phone number.)~~
Edit: This problem has been solved

2) This post is rather old. Are there more sequences available as podcasts by now? How do I find them?
