
Comment author: MrMind 24 March 2017 09:24:34AM 1 point [-]

Cool things I learned from this article:

  • the term "capta" as opposed to "data"

  • reframing "scientism" as "the cargo cult of science" (which I now discover links back to LessWrong... alas, it only clicked for me now)

Less than cool things:

  • torekp's counter-argument seems decisive to me

  • the post-rationalist humus, so to speak.

Comment author: gworley 24 March 2017 06:36:46PM 0 points [-]

Maybe there's another way to talk about the things that we consider "post" rationality? I certainly didn't set out to adopt much the same language that post-modernists and living philosophers use, but they seem to be the only folks who have thought much about these issues before, and so they are the best source I know of for standard, shared terminology I can use. The alternatives would largely be to adopt Sanskrit words used in Indian philosophy or just make stuff up, the latter of which would reproduce a problem rationalist discourse already faces: a lot of non-standard jargon for things that have more standard terms.

Comment author: torekp 23 March 2017 10:17:33PM 2 points [-]

The post persuasively displays some of the value of hermeneutics for philosophy and knowledge in general. Where I part ways is with the declaration that epistemology precedes metaphysics. We know far more about the world than we do about our senses. Our minds are largely outward-directed by default. What you know far exceeds what you know that you know, and what you know about how you know is smaller still. The prospects for reversing cart and horse are dim to nonexistent.

Comment author: gworley 24 March 2017 06:29:39PM 1 point [-]

This reads to me like you're conflating epistemology with experience, and metaphysics with reality. The former in each pair is the study of the latter. I agree that reality exists first and that experience is something that happens inside reality: this is specifically the existentialist view that stuff exists before it has meaning, contrasted with the essentialist view that meaning causes stuff to exist.

The point of saying that epistemology precedes metaphysics is that, because you exist inside reality and know it only through experience inside of it, understanding how you know must come before understanding what you know. To be concrete, I know that 1 + 1 = 2, but I learned this by experiencing that combining one thing and another thing gave me two things. There seems to be little to no evidence for the opposite view, that I had timeless access to the knowledge of the true proposition that 1 + 1 = 2 and was then able to experience putting one thing and another together to get two things because I knew it to be true.

That we are perhaps better at metaphysics than epistemology seems beside the point that knowledge comes to us through experience.

LW UI issue

14 gworley 24 March 2017 06:08PM

Not really sure where else I might post this, but there seems to be a UI issue on the site. When I hit the homepage of lesswrong.com while logged in I no longer see the user sidebar or the header links for Main and Discussion. This is kind of annoying because I have to click into an article first to get to a page where I can access those things. Would be nice to have them back on the front page.

[Link] What Value Hermeneutics?

0 gworley 21 March 2017 08:03PM
Comment author: chaosmage 16 March 2017 09:14:34PM 4 points [-]

That makes sense. But it isn't what Eliezer says in that talk:

There’s a whole set of different ways we could look at agents, but as long as the agents are sufficiently advanced that we have pumped most of the qualitatively bad behavior out of them, they will behave as if they have coherent probability distributions and consistent utility functions.

Do you disagree with him on that?

Comment author: gworley 20 March 2017 02:19:54AM 0 points [-]

Basically agree; it's nearly the same point I was trying to get at, though with less presumption that utility functions are definitely the right thing. I'd leave open more possibility that we're wrong about utility functions always being the best subclass of preference relations, but even if we're wrong about that, our solutions must at least work for utility functions, since they are a smaller set of all the possible ways something could decide.

Comment author: gworley 16 March 2017 07:45:58PM 2 points [-]

I had the same thoughts after listening to the same talk. I think the advantage of utility functions, though, is that they are well-defined mathematical constructs we can reason about, and they showcase corner cases that may pop up in other models but would be easier to miss there. AGI, just like all existing intelligences, may not be implemented with a utility function, but the utility function provides a powerful abstraction for reasoning about what we might more loosely call its "preference relation"; admitting contradictions into that looser model risks missing the cases where the contradictions do not exist and the preference relation becomes a utility function.

The point being that, for the purpose of alignment, studying utility functions makes more sense because your control method can't possibly work on a general preference relation if it can't even work on the simpler utility function. That the real preference relations of existing intelligences contain features which sidestep the challenges of aligning utility functions instead provides evidence of how the problem might be solved (at least for some bounded cases).
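To make the subset relationship concrete, here is a minimal sketch (an illustration only; the option names and helper function are made up for this example, not taken from the talk or the comments above): a strict preference relation with a cycle cannot be reproduced by any real-valued utility function.

```python
# Minimal sketch: a cyclic strict preference cannot be represented by any
# utility function, so utility functions are a strictly smaller class than
# preference relations. (Illustrative names only.)

from itertools import permutations

# Read (x, y) as "x is strictly preferred to y".
cyclic_prefs = {("a", "b"), ("b", "c"), ("c", "a")}       # intransitive cycle
transitive_prefs = {("a", "b"), ("b", "c"), ("a", "c")}   # consistent ranking

def admits_utility_function(options, strict_prefs):
    """True if some utility assignment u reproduces the preferences,
    i.e. x is preferred to y exactly when u[x] > u[y]."""
    for ranking in permutations(options):
        u = {x: -i for i, x in enumerate(ranking)}  # earlier in ranking = higher utility
        if all((u[x] > u[y]) == ((x, y) in strict_prefs)
               for x in options for y in options if x != y):
            return True
    return False

options = ["a", "b", "c"]
print(admits_utility_function(options, transitive_prefs))  # True
print(admits_utility_function(options, cyclic_prefs))      # False
```

The transitive set admits a utility representation and the cyclic one does not; since every utility function corresponds to some preference relation but not vice versa, a control method that fails on utility functions can't work on preference relations in general, which is the sense in which utility functions are the simpler test case.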

Comment author: lifelonglearner 04 March 2017 01:21:00AM 0 points [-]

Hm, I think the thing I'm trying to point at is my general intuition about how people tend to react (it's just an internal model here, so feel free to counter w/ better-informed info), which says something like: if you want people to even get to the point where they can look at written essays on rationality and think to themselves, "Wait, this could apply to me!", you need some sort of baseline of rationality to catalyze the whole thing.

My claim is that getting people to this sort of optimizing step whereupon everything else can work requires something different than what conventional wisdom might dictate (e.g. writing things and/or giving people general advice and telling them to go with it).

Something like "personally interacting with promising individuals, sending off social signals that you know cool stuff to pique their interest, and then slowly getting them to want to care and getting them started off on their journey via slow tidbits / cultivating their interest" seems, I claim, more effective than just finding someone and saying, "Hey! Read this; it'll shatter your worldview!"

Comment author: gworley 06 March 2017 03:53:20AM 0 points [-]

I agree: interactively working in person is more effective.

Comment author: lifelonglearner 03 March 2017 10:55:55PM 0 points [-]

Thanks for giving more information about your theory.

I'd like to express skepticism that people would be able to level up after reading such a treatise. Maybe my mental models of past me and other people aren't very good, but my impression is that giving this sort of stuff to people is not an ideal way to boost people's mental models.

As in, the act of explicating the whole phenomenon behind breaking down what we mean by concepts, systems, etc. seems very different from the act of giving people the tools or leading them to the point where they can start to update their mental models and explore different systems. (That is to say, the act of writing about Leveling Up seems very different than the things you'd need to do to help someone Level Up.)

Happy to extend this if it turns out that we've got differing ideas on the matter.

Comment author: gworley 04 March 2017 12:21:27AM 0 points [-]

Sure, my program for helping people achieve more phenomenological complexity is not to point them at this. It's instead to follow the advice I've previously outlined: act into fear and abandon all hope. That can be hard to apply, though, so folks often need to be specifically induced to face particular fears and abandon particular hopes. Once they've done this and experienced worldview disintegration, they can reintegrate with whatever they like (at least until they have to disintegrate again to make progress), and basically any choice seems fine there, since repeated disintegration eventually forces convergence through the need to accept all of reality into the worldview.

So basically my advice is keep breaking down your assumptions and rebuilding them until you have none left. Then you will be enlightened.

Comment author: lifelonglearner 03 March 2017 06:34:28PM *  1 point [-]

Thanks for writing this up; I enjoy reading attempts to categorize experience, and this one seems plausibly useful as a thing to point to in conversations when jumping between levels.

My attempt at a summary if no one else wants to wade through the dense wall of text (feel free to correct me, gworley):

[gworley draws from sources in phenomenology (the study of experience), systems modeling, meta-levels, and Kegan to try and create some useful distinctions to describe how people study experience.

If you're already familiar with the 5 stages of Kegan's stuff, you'll see lots of parallels.

gworley starts with an introduction to phenomenology, which is summed up by a {subject, experience, object} tuple, where a subject experiences an object. In a similar way to Drescher's Cartesian Camcorder (if anyone's familiar w/ Good and Real), he claims that it is the experiencing of our experiences that metaphorically maps onto the feeling we typically identify as consciousness.

From there, he uses a more formal approach to rebuild Kegan's stages. There is the conscious experience of "things" (like "chairs" and other things our ontology classifies as basic), followed by the conscious experience of "systems", which can be more abstract. This is followed by a system-relationship worldview, which looks at different competing ontologies.

gworley ends by pointing to the idea of a "Holon", which seems similar to Hofstadter's idea of a strange loop. He posits a worldview where there's a sort of meta-framework for connecting with different system relationships. Also, there's an attempt to ground all this in something like complexity classes from computing, but that part seemed sketchy.]

My thoughts: As with most analyses of this kind, it's frustrating that they don't immediately point to things we can do differently; such theories are fairly far removed from practice compared to, say, new debiasing research. However, I think this kind of thinking can be good, especially as ideas in rationality like Actually Trying do sort of skirt the idea of breaking away from assumed social expectations. So I do think there's value in directly explicating these things, if only for the benefit of more clearly building our internal worldviews.

But it does seem to me that the sort of people who tend to coherently understand this stuff are already past the level these sorts of models refer to, which makes them seem less useful as a way to "level up" people, or as a thing to give to aspiring friends.

Comment author: gworley 03 March 2017 07:32:25PM 1 point [-]

Thanks, I appreciate the summary!

Apologies that the complexity stuff seems a bit sketchy. I had to cut the strongest formalisms because developing them was very slow going and will probably take me months. I have a sketch of a mathematical approach to phenomenology as topology over state space, but I need time to fully develop it so I can flesh out a rigorous explanation of what I mean by complexity here.

You're right that this is not especially applicable in this form. It's like describing topology when what you really need to do is integrate a function over real numbers. But given my hermeneutical preferences I think just trying to understand it is likely to lead to some "leveling up", as you put it.

[Link] Phenomenological Complexity Classes

1 gworley 02 March 2017 07:47PM
