Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: ChristianKl 08 July 2017 03:53:51PM 0 points [-]

"Every change of epistemic pressure ΔP is equal to the change in a person's strategies ΔS times their psychological resistance R."

I don't think that's a good way to think about it. More pressure doesn't always lead to more change. It's about applying the right amount of pressure at the right point at the right time.

Comment author: sen 08 July 2017 08:49:24PM 0 points [-]

I don't see how your comment contradicts the part you quoted. More pressure doesn't lead to more change (in strategy) if resistance increases as well. That's consistent with what /u/SquirrelInHell stated.

Comment author: sen 08 July 2017 08:29:25AM *  0 points [-]

That mass corresponds to "resistance to change" seems fairly natural, as does the correspondence between "pressure to change" and impulse. The strange part seems to be the correspondence between "strategy" and velocity. Distance would be something like strategy * time.
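The quoted law lines up with the impulse-momentum theorem; written side by side (standard mechanical symbols on the left, the post's variables on the right, with the correspondence as I read it):

```latex
% impulse-momentum theorem          proposed epistemic analogue
J = m\,\Delta v
\qquad\longleftrightarrow\qquad
\Delta P = R\,\Delta S
% mass m        <-> psychological resistance R
% velocity v    <-> strategy S
% impulse J     <-> change of epistemic pressure \Delta P
```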

Does a symmetry in time correspond to a conservation of energy? Is energy supposed to correspond to resistance? Maybe, though that's a little hard to interpret, so it's a little difficult to apply Lagrangian or Hamiltonian mechanics. The interpretation of energy is important. Without that, the interpretation of time is incomplete and possibly incoherent.

Is there an inverse correspondence between optimal certainty in resistance * strategy (momentum) and optimal certainty in strategy * time (distance)? I would guess so, in which case findings from quantum uncertainty principles and information geometry may apply.

Does strategy impact one's perception of "distances" (strategy * time) and timescales? Maybe, in which case findings from special relativity may apply. A universally-observable distance isn't defined though, and that precludes a more coherent application of special/general relativity. Some universal observables should be stated. Other than the obvious objectivity benefits, this could help more clearly define relationships between variables of different dimensions. This one isn't that important, but it would enable much more interesting uses of the theory.

Comment author: sen 05 July 2017 04:55:03AM *  0 points [-]

The process you went through is known in other contexts as decategorification. You attempted to reduce the level of abstraction, noticed a potential problem in doing so, and concluded that the more abstract notion was not as well-conceived as you imagined.

If you try to enumerate questions related to a topic (Evil), you will quickly find that you (1) repeatedly tread the same ground, (2) are often unable to combine findings from multiple questions in useful ways, and (3) are often unable to identify questions worth answering, let alone a hierarchy that suggests which questions might be more worth answering than others.

What you are trying to identify are the properties and structure of evil. A property of Evil is a thing that must be preserved in order for Evil to be Evil. The structure of Evil is the relationship between Evil and other (Evil or non-Evil) entities.

You should start by trying to identify the shape of Evil by identifying its border, where things transition from Evil to non-Evil and vice versa. This will give you an indication of which properties are important. From there, you can start looking at how Evil relates to other things, especially with regard to its properties. This will give you some indication of its structure. Properties are important for identifying Evil clearly. Structure is important for identifying things that are equivalent to Evil in all ways that matter. It is often the case that the two are not the same.

If you want to understand this better, I recommend looking into category theory. The general process of identifying ambiguities, characterizing problems in the right way, applying prior knowledge, and gluing together findings into a coherent whole is fairly well-worn. You don't have to start from scratch.

Comment author: sen 05 July 2017 12:08:35AM *  1 point [-]

"But hold up", you say. "Maybe that's true for special cases involving competing subagents, ..."

I don't see how the existence of subagents complicates things in any substantial way. If the existence of competing subagents is a hindrance to optimality, then one should aim to align or eliminate subagents. (Isn't this one of the functions of meditation?) Obviously this isn't always easy, but the goal is at least clear in this case.

It is nonsensical to treat animal welfare as a special case of happiness and suffering. This is because animal happiness and suffering can only be understood through analogical reasoning, not through logical reasoning. A logical framework of welfare can only be derived through subjects capable of conveying results, since results are subjective. The vast majority of animals, at least so far, cannot convey results, so we need to infer results for animals based on similarities between animal observables and human observables. Such inference is analogical and necessarily based entirely on human welfare.

If you want a theory of happiness and suffering in the intellectual sense (where physical pleasure and suffering are ignored), I suspect what you want is a theory of the ideals towards which people strive. For such an endeavor, I recommend looking into category theory, in which ideals are easily recognizable, and whose ideals seem to very closely (if not perfectly) align with intuitive notions.

Comment author: adamzerner 02 July 2017 10:35:54AM 0 points [-]

By using the word "just", it gives me the impression that you think it's easy to not get lost. In my experience with other fields, it is easy to get lost, and I would assume that the same is true with machine learning.

Comment author: sen 02 July 2017 01:05:51PM *  0 points [-]

I meant it as "This seems like a clear starting point." You're correct that I think it's easy to not get lost with those two starting points.

In my experience with other fields, it's easy to get frustrated and give up. Getting lost is quite a bit more rare. You'll have to click through a hundred dense links to understand your first paper in machine learning, as with any other field. If you can trudge through that, you'll be fine. If you can't, you'll at least know what to ask.

Also, are you not curious about how much initiative people have regarding the topics they want to learn?

Comment author: sen 02 July 2017 08:38:56AM 0 points [-]

A question for people asking for machine learning tutors: have you tried just reading through OpenAI blog posts and running the code examples they embed or link? Or going through the TensorFlow tutorials?

Comment author: username2 26 June 2017 08:54:17PM 2 points [-]

Are you an avid reader of non-fiction books outside your field of work? If so, how do you choose which books to read?

Comment author: sen 02 July 2017 01:12:58AM *  0 points [-]

Yes. I follow authors, I ask avid readers similar to me for recommendations, I observe best-of-category polls, I scan through collections of categorized stories for topics that interest me, I click through "Also Liked" and "Similar" links for stories I like. My backlog of things to read is effectively infinite.

Comment author: TezlaKoil 01 July 2017 10:40:35PM *  2 points [-]

In a topological space, defining

  1. X ∨ Y as X ∪ Y
  2. X ∧ Y as X ∩ Y
  3. X → Y as Int( X^c ∪ Y )
  4. ¬X as Int( X^c )

does yield a Heyting algebra. This means that the understanding (but not the explanation) of /u/cousin_it checks out: removing the border on each negation is the "right way".
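As a sanity check, the four operations can be implemented directly on the open sets of a small toy topology and verified to stay inside the topology, so the opens really do form an algebra. A minimal Python sketch; the three-point space and its opens are illustrative, not from the comment:

```python
# Toy topology on {0, 1, 2}: a chain of opens, closed under union/intersection.
SPACE = frozenset({0, 1, 2})
OPENS = {frozenset(), frozenset({0}), frozenset({0, 1}), SPACE}

def interior(s):
    # Largest open set contained in s: the union of all opens inside s.
    result = frozenset()
    for u in OPENS:
        if u <= s:
            result |= u
    return result

def join(x, y):    return x | y                        # 1. X ∨ Y = X ∪ Y
def meet(x, y):    return x & y                        # 2. X ∧ Y = X ∩ Y
def implies(x, y): return interior((SPACE - x) | y)    # 3. X → Y = Int(X^c ∪ Y)
def neg(x):        return interior(SPACE - x)          # 4. ¬X = Int(X^c)

# Each operation sends open sets to open sets.
for x in OPENS:
    for y in OPENS:
        assert join(x, y) in OPENS
        assert meet(x, y) in OPENS
        assert implies(x, y) in OPENS
    assert neg(x) in OPENS
print("all four operations stay inside the topology")
```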

Notice that under this interpretation X is always a subset of ¬¬X:

  1. Int(X^c) is a subset of X^c; by definition of Int(-).
  2. Int(X^c)^c is a superset of X^c^c = X; since taking complements reverses containment.
  3. Int( Int(X^c)^c ) is a superset of Int(X) = X; since Int(-) preserves containment.

But Int( Int(X^c)^c ) is just ¬¬X. So X is always a subset of ¬¬X.

However, in many cases ¬¬X is not a subset of X. For example, take the Euclidean plane with the usual topology, and let X be the plane with one point removed. Then ¬X = Int( X^c ) = ∅ is empty, so ¬¬X is the whole plane. But the whole plane is obviously not a subset of the plane with one point removed.
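A finite analogue of this counterexample can be run directly. Here the "plane minus a point" is modeled by a three-point space {L, p, R} whose opens are generated by {L} and {R}; the names and the toy topology are illustrative, but the behavior mirrors the Euclidean example:

```python
# Three-point stand-in for the plane: p is the removed point, L and R
# are the open "halves" on either side of it.
POINTS = frozenset({"L", "p", "R"})
OPENS = [frozenset(), frozenset({"L"}), frozenset({"R"}),
         frozenset({"L", "R"}), POINTS]

def interior(s):
    # Largest open set contained in s.
    result = frozenset()
    for u in OPENS:
        if u <= s:
            result |= u
    return result

def neg(x):
    # Heyting negation: ¬X = Int(X^c).
    return interior(POINTS - x)

X = frozenset({"L", "R"})   # the space minus the point p (an open set)
print(neg(X))               # frozenset() -- empty, as in the plane example
print(X <= neg(neg(X)))     # True  -- X is a subset of ¬¬X
print(neg(neg(X)) <= X)     # False -- ¬¬X is the whole space, not a subset of X
```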

Comment author: sen 02 July 2017 12:28:44AM 1 point [-]

I see. Thanks for the explanation.

Comment author: cousin_it 01 July 2017 08:46:04AM *  0 points [-]

Yeah, I mentioned the topology complications.

"If you remove the border on each negation 'A is a subset of Not Not A' is false under your method, though it should yield true."

How so? I thought removing the border on each negation was the right way. (Also you need to start out with no border, basically you should have open sets at each step.)

Lambda calculus is indeed a nice way to understand intuitionism, that's how I imagined it since forever :-) Also the connection between Peirce's law and call/cc is nice. And the way it prevents confluence is also really nice. This stackoverflow question has probably the best explanation.

Comment author: sen 01 July 2017 08:55:23PM *  0 points [-]

"How so? I thought removing the border on each negation was the right way."

I gave an example of where removing the border gives the wrong result. Are you asking why "A is a subset of Not Not A" is true in a Heyting algebra? I think the proof goes like this:

  • (1) (a and not(a)) = 0
  • (2) By #1, (a and not(a)) is a subset of 0
  • (3) For all c,x,b: ((c and x) is a subset of b) iff (c is a subset of (x implies b))
  • (4) By #2 and #3, a is a subset of (not(a) implies 0)
  • (5) For all c, not(c) = (c implies 0)
  • (6) By #4 and #5, a is a subset of not(not(a))
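The adjunction in step (3) and the conclusion in step (6) can both be checked mechanically on the Heyting algebra of open sets of a small topology. A Python sketch; the four-point space and its opens are a toy example of my choosing:

```python
# Toy topology on {0, 1, 2, 3}, closed under unions and intersections.
SPACE = frozenset(range(4))
OPENS = [frozenset(), frozenset({0}), frozenset({0, 1}),
         frozenset({2, 3}), frozenset({0, 2, 3}), SPACE]

def interior(s):
    # Largest open set contained in s.
    result = frozenset()
    for u in OPENS:
        if u <= s:
            result |= u
    return result

def implies(x, y):
    # Topological Heyting implication: Int(X^c ∪ Y).
    return interior((SPACE - x) | y)

def neg(x):
    return implies(x, frozenset())   # not(c) = (c implies 0), as in step (5)

# Step (3): (c and x) <= b  iff  c <= (x implies b), for all opens.
for c in OPENS:
    for x in OPENS:
        for b in OPENS:
            assert ((c & x) <= b) == (c <= implies(x, b))

# Step (6): a is a subset of not(not(a)), for all opens.
for a in OPENS:
    assert a <= neg(neg(a))
print("adjunction and a <= not(not(a)) verified on all opens")
```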

Maybe your method is workable when you interpret a Heyting subset to be a topological superset? Then 1 is the initial (empty) set and 0 is the terminal set. That doesn't work with intersections though. "A and Not A" must yield 0, but the intersection of two non-terminal sets cannot possibly yield a terminal set. The union can though, so I guess that means you'd have to represent And with a union. That still doesn't work though because "Not A and Not Not A" must yield 0 in a Heyting algebra, but it's missing the border of A in the topological method, so it again isn't terminal.

I don't see how the topological method is workable for this.

Comment author: cousin_it 28 June 2017 07:59:53PM *  0 points [-]

Yeah I know. I'm only looking at it now because intuitionistic logic can't be reduced to finite truth tables like classical logic, it really needs these pictures. That's kind of weird in itself, but hard to explain in a short post.

Comment author: sen 01 July 2017 07:48:15AM 0 points [-]

I guess today I'm learning about Heyting algebras too.

I don't think that circle method works. "Not Not A" isn't necessarily the same thing as "A" in a Heyting algebra, though your method suggests that they are the same. You can try to fix this by adding or removing the circle borders through negation operations, but even that yields inconsistent results. For example, if you add the border on each negation, "A or Not A" yields 1 under your method, though it should not in a Heyting algebra. If you remove the border on each negation "A is a subset of Not Not A" is false under your method, though it should yield true.

I think it's easier to think of Heyting algebra in terms of functions and arguments. "A implies B" is a function that takes an argument of type A and produces an argument of type B. 0 is null. "A and B" is the set of arguments a,b where a is of type A and b is of type B. If null is in the argument list, then the whole argument list becomes null. "Not A" is a function that takes an argument of type A and produces 0. "Not Not A" can be thought of in two ways: (1) it takes an argument of type Not A and produces 0, or (2) it takes an argument of type [a function that takes an argument of type A and produces 0] and produces 0.

If "(A and B and C and ...) -> 0" then "A -> (B -> (C -> ... -> 0))". If you've worked with programming languages where lambda functions are common, this is currying: turning a function of several arguments into a chain of one-argument functions, each of which returns the next.
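The correspondence above is easy to see in code. A minimal sketch, with hypothetical function names standing in for the types:

```python
def uncurried(a, b):
    # stands for (A and B) -> 0: takes both arguments at once
    return 0

def curry(f):
    # turn f(a, b) into f(a)(b): a chain of one-argument functions,
    # standing for A -> (B -> 0)
    return lambda a: lambda b: f(a, b)

curried = curry(uncurried)
assert uncurried("a", "b") == curried("a")("b")
print(curried("a")("b"))   # 0
```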

I don't see it on the Wikipedia page, but I'd guess that "A or B" means "(Not B implies A) and (Not A implies B)".

If you don't already, I highly recommend studying category theory. Most abstract mathematical concepts have simple definitions in category theory. The category theoretic definition of Heyting algebras on Wikipedia consists of 6 lines, and it's enough to understand all of the above except the Or relation.
