Comment author: morganism 08 August 2016 10:17:58PM 3 points [-]

AMA: We are the Google Brain team. We'd love to answer your questions about machine learning.

https://www.reddit.com/r/MachineLearning/comments/4w6tsv/ama_we_are_the_google_brain_team_wed_love_to/

I missed this, even while reading AskScience back then...

Comment author: Gyrodiot 09 August 2016 01:58:18PM 1 point [-]

As of today, the thread is still gathering questions. The team will start answering them August 11; a LW post may be of interest then.

Comment author: Gyrodiot 02 August 2016 10:00:57AM 0 points [-]

Taken in isolation, part 1 left me confused. Part 2 greatly improves the value of the transcript.

Zebra is dealing with a set of problems, which may or may not stem from a single Problem. I saw your questions as an effort not only to clarify the issues, but also to determine the structure of the problem set. Here, you describe the problem spiral, where solving one issue raises further issues because you keep thinking of the whole set instead of taking each issue in isolation.

Note there are two seemingly conflicting strategies here. One is to solve part of the problem, focusing on it for a given time, trying to jumpstart a success spiral. But how would you differentiate this from bikeshedding? How can you be sure you're not focusing on irrelevant things?

On the other hand, carefully thinking about the Problem, and how to solve it all at once or by the correct sequence of actions, may lead to an efficient strategy. However, taking on the Problem as a whole shatters all motivation, warps the view of the problem set, and makes small tasks seem undoable.

You linked Zebra, in the transcript, to Nate Soares's Replacing Guilt series. I would pinpoint my advice with Moving towards the goal. Solving the Problem (the cluster of problems, taken as a single eldritch entity) has to be set aside. You can picture the goal state, but the planning fallacy goes both ways: you can overestimate the difficulty of the solution. What matters is to make progress.

(I may have paraphrased your post here - I wanted to write down my own understanding of it.)

Comment author: Arielgenesis 27 July 2016 04:14:00AM 2 points [-]

What are rationalist presumptions?

I am new to rationality and Bayesian ways of thinking. I am reading the Sequences, but I have a few questions along the way. These questions are from the first article (http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/)

Epistemic rationality

I suppose we do presume things: that we are not dreaming, not under a global and permanent illusion by a demon, not a brain in a vat, not in a Truman Show, not in a Matrix. And that, sufficiently frequently, you mean what I think you meant. I am wondering if there is a list of things that rationalists presume and take for granted without further proof. Is there anything that is self-evident?

Instrumental rationality

Sometimes a value can derive from another value (e.g. I do not value monarchy because I hold the value that all men are created equal). But either we have circular values or we take some value to be evident ("We hold these truths to be self-evident, that all men are created equal"). I think circular values make no sense. So my question is: what are the values that most rationalists agree to be intrinsically valuable, self-evident, or presumed to be valuable in and of themselves?

Comment author: Gyrodiot 27 July 2016 06:52:52PM 1 point [-]

Hi Arielgenesis, and welcome!

From a rationalist perspective, taking things for granted is both dangerous and extremely useful. We want to preserve our ability to change our minds about things in the right direction (closer to truth) whenever the opportunity arises. That being said, we cannot afford to doubt everything, as updating our beliefs takes time and resources.

So there are things we take for granted: most mathematics, physics, the basic laws and phenomena of science in general. Those are ideally backed by the scientific method, whose axioms are grounded in building a useful model of the world (see Making Beliefs Pay Rent (in Anticipated Experiences)).

From my rationalist perspective, then, there are no self-evident things, but there are obvious things, considered evident by the overwhelming weight of available... evidence.

Regarding values... it's a tough problem. I personally find that all preconceptions I had about universally shared values are shattered one by one the more I study them. For more information on this, I shall redirect you to complexity of value.

Comment author: Gyrodiot 05 April 2016 01:17:36PM *  1 point [-]

(meta: I'm not sure if I should make a Discussion post for this, so I'm posting here. Feedback most welcome)

I'm exploring the following hypothesis: sometimes, you have to give up constructive actions for the sake of focus.

Most productivity methods suggest the obvious: keep wasteful activities in check. It could be gaming, chatting, checking news websites. They all share a common trait: you don't gain any significant utility (no money, no fun, no rest) from spending more time on them. You achieve the same result by spending a little time on them rather than a full day.

With productive activities, time spent and value created aren't proportional. Sometimes you're lacking energy or inspiration, and that's okay: you don't have to work yourself ragged.

If you have multiple tasks to be achieved in parallel, you should treat them as sequential anyway. Focusing on one task at a time yields better results than task switching all the time.

Problems arise when you find inspiration, or a sudden peak of interest in a certain task which is useful in isolation but doesn't fit in your schedule. Maybe a discussion with a friend sparked the idea of a story to write. Maybe you're considering moving some furniture because you're well-rested and full of energy.

Even if you could be maximally productive for a given useful task, you should treat it as a wasteful activity if you have something else you planned to do. If the idea sounds good, write it down. If it's really good, hype will come back another time. If you're energetic, do the most physical thing you had planned to do. Energy will come back another time.

The goal is not to add another task on your current schedule and mess with the plan you've set for the day, like you'd do with "classical" wasteful activities. You can convince yourself easily that news websites can wait another day. The unintuitive part is this also holds with most productive activities, even though you're training yourself to not defer work!

Comment author: pragmatist 31 March 2016 11:24:12AM *  0 points [-]

When you update, you're not simply imagining what you would believe in a world where E was true, you're changing your actual beliefs about this world. The point of updates is to change your behavior in response to evidence. I'm not going to change my behavior in this world simply because I'm imagining what I would believe in a hypothetical world where E is definitely true. I'm going to change my behavior because observation has led me to change the credence I attach to E being true in this world.

Comment author: Gyrodiot 01 April 2016 02:41:18PM *  0 points [-]

There's a labeling problem here. E is an event. The extra information you're updating on (the evidence, the thing you are certain of) is not "E is true". It's "E has probability p". You can't actually update until you know the probability of E.

What the joint probabilities give you is by how much you have to update your credence in H, given E. Without P(E), you can't actually update.

P(H|E) tells you "OK, if E is certain, my new probability for H is P(H|E)". P(H|~E) tells you "OK, if E is impossible, my new probability for H is P(H|~E)". In the case of P(E) = 0.5, I will update by taking the mean of both.

Updating, proper updating, only happens when you are certain of the probability of E (this is different from "being certain of E"), and the formulas tell you by how much. Your joint probabilities are information themselves: they tell you how E relates to H. But you can't update on H until you have evidence about E.
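The weighted-average rule described in this comment (taking the mean of P(H|E) and P(H|~E) when P(E) = 0.5, and more generally weighting by P(E)) can be sketched in a few lines of Python. The probability values below are illustrative, not from the discussion:

```python
# Updating on uncertain evidence: when E itself only holds with
# probability p_e, the new credence in H is the average of the two
# conditional probabilities, weighted by P(E) and P(~E).
def update(p_h_given_e, p_h_given_not_e, p_e):
    """New credence in H, given that E holds with probability p_e."""
    return p_h_given_e * p_e + p_h_given_not_e * (1 - p_e)

# If E is certain, we recover plain conditioning on E:
print(update(0.9, 0.2, 1.0))  # 0.9
# If E is impossible, we condition on ~E instead:
print(update(0.9, 0.2, 0.0))  # 0.2
# With P(E) = 0.5, we take the mean of both, as the comment says:
print(update(0.9, 0.2, 0.5))  # 0.55, the mean of 0.9 and 0.2
```

The edge cases P(E) = 1 and P(E) = 0 match the "OK, if E is certain / impossible" readings of P(H|E) and P(H|~E) given above.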

Comment author: Gyrodiot 30 March 2016 01:05:04PM 0 points [-]

Consider P(E) = 1/3. We can consider three worlds, W1, W2 and W3, all with the same probability, with E being true in W3 only. Placing yourself in W3, you can evaluate the probability of H while setting P(E) = 1 (because you're placing yourself in the world where E is true with certainty).

In the same way, by placing yourself in W1 and W2, you evaluate H with P(E) = 0.

The thing is, you're "updating" on a hypothetical fact. You're not certain of being in W1, W2, or W3. So you're not actually updating: you're artificially considering a world where the probabilities are shifted to 0 or 1, and weighting the outcomes by the probability of that world happening.
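The three-world picture can be made concrete by enumerating the worlds directly: each world fixes E to true or false, and the overall credence in H is the probability-weighted sum over worlds. The conditional probabilities below are made up for illustration:

```python
# Weighted average over hypothetical worlds: each world contributes
# P(H in that world), weighted by the probability of being in it.
def credence_in_h(worlds):
    """worlds: list of (world_probability, p_h_in_that_world) pairs."""
    return sum(p_world * p_h for p_world, p_h in worlds)

# P(E) = 1/3: E is true only in W3. Illustrative conditionals:
# P(H|~E) = 0.2 (used in W1 and W2), P(H|E) = 0.8 (used in W3).
worlds = [(1/3, 0.2), (1/3, 0.2), (1/3, 0.8)]
print(credence_in_h(worlds))  # (0.2 + 0.2 + 0.8) / 3, i.e. about 0.4
```

With equally probable worlds this reduces to a plain mean, which is why the P(E) = 0.5 case in the later comment is just the average of P(H|E) and P(H|~E).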

Comment author: Huluk 26 March 2016 12:55:37AM *  26 points [-]

[Survey Taken Thread]

By ancient tradition, if you take the survey you may comment saying you have done so here, and people will upvote you and you will get karma.

Let's make these comments a reply to this post. That way we continue the tradition, but keep the discussion a bit cleaner.

Comment author: Gyrodiot 26 March 2016 07:16:28AM 42 points [-]

I have taken the survey. Yesterday.

In response to On Making Things
Comment author: Gyrodiot 05 March 2016 09:41:36AM 2 points [-]

I really enjoyed this post. The process of making things, and the motivation behind it, interests me. Also, congrats on making this whiteboard!

Did you, at any point, think "is this really worth my time"? From your description, I suppose the fun justifies the whole thing, and the fact that you made a usable thing adds to the value. I'm often overthinking fun and how I spend my time, so I wonder how to mitigate these feelings of "am I making the right choice by doing this at all?"

In response to comment by ciphergoth on Where are we?
Comment author: Emile 03 April 2009 05:59:33AM 2 points [-]

Post in this thread if you live in France.

In response to comment by Emile on Where are we?
Comment author: Gyrodiot 09 February 2016 01:59:43PM 0 points [-]

Greetings, from Toulouse.

Comment author: entirelyuseless 08 February 2016 04:07:55PM *  9 points [-]

The burning is the unsatisfied desire for sex, and lifting the branch is offering sex. At the end of the story, the boy goes to prison for attempted rape. I presume you were joking in saying that you did not recognize this, or that you simply intended to say that you consider it a bad analogy.

In any case, I agree that such an analogy is pointless, and that is why I downvoted the post.

Comment author: Gyrodiot 09 February 2016 10:03:46AM 3 points [-]

Thanks! I wasn't joking. Now that I've read the whole thing once again, the metaphor should have been perfectly obvious, but I guess I wasn't in the right state of mind yesterday.

Well, now that I understand, I wish there hadn't been any metaphor. Here it conceals the point rather than offering a new perspective on it.
