Comment author: Gram_Stone 12 August 2015 10:16:15PM 3 points [-]

Has anyone managed to avoid Bottom Lining in their everyday thinking? I find it very difficult: it's natural, and it's a shortcut I find useful more often than harmful. I wonder if it's best to flag issues where epistemic irrationality would be very costly, and focus on avoiding Bottom Lining at those times. The cases I'm talking about seem different in spirit from the ones the article originally intended: you're not so much emotionally invested in the world being a certain way as you are, say, relying on your intuition as a primary source of evidence to save time and avoid false starts.

Comment author: Gust 13 August 2015 02:43:14PM 2 points [-]

The way I see it, having intuitions and trusting them is not necessarily harmful. But you should recognize them for what they are: snap judgements made by subconscious heuristics that have little to do with the explicit arguments you come up with. That way, you can treat an intuition as a kind of evidence or argument, instead of a Bottom Line - like an opinion from a supposed expert who tells you that "X is Y" but doesn't have time to explain. You can then ask: "Is this guy really an expert?" and "Do other arguments or pieces of evidence outweigh the expert's opinion?"
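Treating an intuition as one piece of evidence rather than a fixed conclusion can be made concrete with odds-form Bayes. A minimal sketch (the prior and the likelihood ratio of 2 are illustrative assumptions, not anything from the comment): the intuition shifts your odds by however reliable the "expert" is, but it never locks in the conclusion.

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Odds-form Bayes: multiply prior odds by the evidence's likelihood ratio."""
    return prior_odds * likelihood_ratio

# Suppose prior odds of 1:4 (0.25) that "X is Y", and suppose this intuition
# fires twice as often when "X is Y" is true as when it's false (LR = 2):
odds = posterior_odds(0.25, 2.0)
print(odds)                  # 0.5, i.e. posterior probability 1/3
print(odds / (1.0 + odds))   # odds converted back to a probability
```

The point of the framing: a weak "expert" gets a likelihood ratio near 1 and barely moves you, while contrary evidence can still outweigh the intuition.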

Comment author: lukeprog 22 December 2013 11:44:12PM *  31 points [-]

A note on how this post was produced:

Eliezer brain-dumped his thoughts on this open problem to Facebook, and replied to questions there for several hours. Then Robby spent time figuring out how to structure a series of posts that would more clearly explain the open problem, and wrote drafts of those posts. Several people, including Eliezer, commented heavily on various drafts until they reached a publishable form. Louie coordinates the project.

After discussion of the posts on Less Wrong, we may in some cases get someone to write up journal article expositions of some of the ideas in the posts.

The aim is to write up open problems in Friendly AI using as little Eliezer-time as possible. It seems to be working so far.

Comment author: Gust 07 August 2015 04:02:38AM 0 points [-]

I'm sad the original FB posts were deleted. Now I can never show my kids the occasion where Eliezer endorsed a comment of mine =(

Comment author: Gust 07 August 2015 04:01:23AM 0 points [-]

Brain dump of a quick idea:

A sufficiently complex bridge law might say that the agent is actually a rock which, through some bizarre arbitrary encoding, encodes a computation[1], while the actual agent is somewhere else. Hopefully the agent has an adequate Occamian prior and never assigns this hypothesis any relevance, because of the high complexity of the encoding.

In idea-space, though, there is a computation which is encoded by a rock using a complex arbitrary encoding, which, by virtue of having a weird prior, concludes that it actually is that rock, and to whom the breaking of the rock would mean death. We usually regard this computation as irrelevant for moral purposes - only computations corresponding to "natural" interpretations of physical phenomena count. But this distinction between natural and arbitrary criteria of interpretation seems, well, arbitrary.

We regard a person's body as "executing" the computation that is a person that thinks she is in that body. But we do not regard the rock as actually "executing" the computation that is a weird agent that thinks it's the computation encoded by the rock (through some arbitrary encoding).

Why?

The pragmatic reason is obvious: you can't do anything to help the rock-computation in any way, and whatever you do, you'd be lowering utility for some other ideal computation.

But maybe the kind of reasoning the rock-computation has to make to keep seeing itself as the rock is relevant.

A rock-as-computation hypothesis ("I am this rock in this universe" = "my phenomenological experiences correspond to the states of atoms at several points in this rock, as translated by this [insert very long bridge law, full of specifics] bridge law") is doomed to fail within the next few steps of the computation. Because the bridge law is so ad hoc, it won't correctly predict the next phenomena perceived by the computation (in the ideal or real world where it actually executes). So if the rock-computation does induction at all, it will have to change the bridge law and give up on being that rock.

In other words, if we built a robot with a prior that privileges the bridge law hypothesis that it's a computation encoded in a rock through some bizarre encoding, it would have to abandon that hypothesis very soon. And, as phenomenological data came in, it would approach the correct hypothesis that it's the robot. Unless it's doing some kind of privileged-hypothesis anti-induction, where it keeps adopting increasingly complex bridge laws to keep believing it is that one rock.

So, a proposal: a substrate should be regarded as embodying a computation-agent if that computation-agent, by doing induction in the same sense we do to find out about ourselves, would eventually arrive at the correct bridge law: that it is being executed in said substrate.

--

[1] Rock example is from DRESCHER, Gary, Good and Real, 2006, p. 55.

Comment author: Gust 31 July 2015 06:28:39PM 2 points [-]

The ULH suggests that most everything that defines the human mind is cognitive software rather than hardware: the adult mind (in terms of algorithmic information) is 99.999% a cultural/memetic construct.

I think a distinction worth drawing here is the difference between "learning" in the neural-net sense and "learning" in the human pedagogical/psychological sense.

The "learning" done by a piece of cortex becoming visual cortex after receiving neural impulses from the eye isn't something you can override by teaching a person (in the usual sense of the word "teaching") - you'd need to rewire their brain. I don't think you can call it cultural/memetic, because this neural learning does not (seem to) occur through the mechanism(s) that deal with concepts, ideas and feelings - the mechanisms involved in learning a language, a social custom or a scientific theory.

In the same way, maybe the availability heuristic isn't genetically coded, but is learned from the type of data certain parts of the brain have to work with. That would mean you could fix it through some input rewiring during gestation, but it doesn't mean you can change it through a new human education system - it may be too low-level, like a generic part of the cortex becoming the visual cortex. If that's the case, I wouldn't say it's a cultural/memetic construct (although it is an environmental construct).

Comment author: TheAncientGeek 16 July 2015 07:24:44PM -1 points [-]

I meant not hardcoding values or ethics.

Comment author: Gust 16 July 2015 07:50:00PM 0 points [-]

Well, you'd have to hardcode at least a learning algorithm for values if you expect any real chance that the AI behaves like a useful agent, and that falls within the category of important functionality. But I guess you'd agree with that.

Comment author: TheAncientGeek 12 May 2015 01:48:39PM 0 points [-]

The lesson is: do not hard-code important functionality into your AGI without proving it correct. 

That's two lessons. Not hardcoding at all is underexplored around here.

Comment author: Gust 16 July 2015 07:21:28PM 0 points [-]

You have to hardcode something, don't you?

Comment author: Caue 10 June 2015 11:50:42PM 3 points [-]

At last. Wouldn't miss it.

Comment author: Gust 26 June 2015 09:40:09PM 0 points [-]

You're a Brazilian studying Law who's been around LW since 2013 and I'd never heard of you? Wow. Please show up!

Meetup : São Paulo, Brazil - Meetup at Base Sociedade Colaborativa

3 Gust 09 June 2015 01:47AM

Discussion article for the meetup : São Paulo, Brazil - Meetup at Base Sociedade Colaborativa

WHEN: 27 June 2015 01:00:00PM (-0300)

WHERE: Rua Maestro Elias Lobo, 923, São Paulo, Brazil

Let's have a meetup and talk a bit about rationality and rationalism in Brazil!

This is not a LessWrong only event (afaik, there are not many LWers around here), so any news and discussions about the meetup will happen primarily in the facebook event:

https://www.facebook.com/events/815839175152653/815899785146592/

Please join us there!

Proposed activities:

- Short talks (~20 min) about organizations and projects to spread rationality and raise the sanity waterline
- Discussion tables about interesting stuff

Again, please join the event on Facebook!


In response to comment by [deleted] on Graphical Assumption Modeling
Comment author: ozziegooen 08 January 2015 05:00:01AM 1 point [-]

I actually watched his TED talk last night. Will look more into his stuff.

The main issues I'm facing are understanding the math behind combining estimates and actually making the program right now. However, he definitely seems to be one of the top world experts on actually making these kinds of models.

Comment author: Gust 23 April 2015 04:18:41AM 0 points [-]

If you keep the project open source, I might be able to help with the programming (although I don't know much about Rails, I could help with the client side). The math is a mystery to me too, but can't you charge ahead with a simple geometric mean for combining estimates while you figure it out?
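A minimal sketch of the stopgap suggested above: the geometric mean of n positive estimates is exp of the mean of their logs. (This is a placeholder aggregation rule, not anything the project actually specifies; it requires strictly positive estimates.)

```python
import math

def geometric_mean(estimates):
    """Combine positive point estimates by geometric mean:
    exp of the arithmetic mean of the logs."""
    if not estimates or any(x <= 0 for x in estimates):
        raise ValueError("geometric mean needs a non-empty list of positive values")
    return math.exp(sum(math.log(x) for x in estimates) / len(estimates))

# Two experts estimate 10 and 1000; the geometric mean splits the
# difference on a log scale rather than averaging to 505:
print(geometric_mean([10, 1000]))
```

One reason it's a sensible default for estimate aggregation: it is less dominated by a single wildly high estimate than the arithmetic mean.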

Comment author: hydkyll 12 April 2015 03:09:54PM 1 point [-]

How is that translation coming along? I could help with German.

Comment author: Gust 13 April 2015 02:44:16PM *  0 points [-]

We're translating to Brazilian Portuguese only, since that's our native language.
