Comment author: Caspian 24 July 2013 11:18:31PM 1 point [-]

Q. Are the current high levels of unemployment being caused by advances in Artificial Intelligence automating away human jobs?

A. Conventional economic theory says this shouldn't happen. Suppose it costs 2 units of labor to produce a hot dog and 1 unit of labor to produce a bun, and that 30 units of labor are producing 10 hot dogs in 10 buns. If automation makes it possible to produce a hot dog using 1 unit of labor instead, conventional economics says that some people should shift from making hot dogs to buns, and the new equilibrium should be 15 hot dogs in 15 buns. On standard economic theory, improved productivity - including from automating away some jobs - should produce increased standards of living, not long-term unemployment.
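The labour arithmetic in the quoted example can be checked with a small sketch (a hypothetical helper, assuming labour is the only input and each hot dog is sold in exactly one bun):

```python
# Toy check of the hot-dog/bun example: labour is the only input,
# and output is "hot dogs in buns", one bun per hot dog.
def equilibrium_output(total_labour, labour_per_hot_dog, labour_per_bun):
    # Each "hot dog in a bun" costs (labour_per_hot_dog + labour_per_bun)
    # units of labour, so total output is total labour over that cost.
    return total_labour // (labour_per_hot_dog + labour_per_bun)

# Before automation: 2 units per hot dog, 1 per bun, 30 units of labour.
print(equilibrium_output(30, 2, 1))  # 10 hot dogs in 10 buns
# After automation halves the labour cost of a hot dog:
print(equilibrium_output(30, 1, 1))  # 15 hot dogs in 15 buns
```

The same 30 units of labour buy more output after automation, which is the "improved productivity, not unemployment" conclusion the quoted answer draws.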

You need to include inputs other than labour, and I think conventional economics allows for doing that.

Then the people who are less efficient than machines at converting the other inputs into products may become unemployed, if the machines are cheap enough.

Comment author: wedrifid 20 July 2013 05:30:09AM 0 points [-]

Interestingly, the wiki turns it around and says that epistemic rationality is a special case of instrumental rationality.

I wonder who wrote the wiki page. The claim is controversial. I'd say that the article would be better without it.

Comment author: Caspian 20 July 2013 11:07:01PM 0 points [-]

That part of the wiki page was written in this edit

Comment author: Stuart_Armstrong 20 July 2013 07:00:58PM *  3 points [-]

Why? If you can end up on either end of a long line segment, then you have a chance of winning a lot or losing a lot. But you shouldn't be risk averse with your utility - risk aversion should already be included. So "towards the middle" is no better in expectation than "right end or left end".
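The "no better in expectation" claim can be illustrated with a toy calculation (illustrative numbers, assuming the endpoints are already measured in utility):

```python
# Toy illustration: for an expected-utility maximiser, a 50/50 gamble
# between the two ends of a utility line segment is exactly as good as
# getting the midpoint outcome for certain.
left_utility, right_utility = 0.0, 10.0
gamble_value = 0.5 * left_utility + 0.5 * right_utility
midpoint_value = (left_utility + right_utility) / 2
print(gamble_value, midpoint_value)  # 5.0 5.0
```

Any risk aversion would have to show up in how the endpoints were assigned their utility values in the first place, not in a preference between these two equal expectations.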

Maybe you're thinking we shouldn't be maximising expected utility? I'm actually quite sympathetic to that view...

And with complex real world valuations (eg anything with a diminishing marginal utility), then any Pareto line segments are likely to be short.

Comment author: Caspian 20 July 2013 10:15:21PM 3 points [-]

Nonlinear utility functions (as a function of resources) do not accurately model human risk aversion. That could imply that we should either change our (or their) risk aversion or stop maximising expected utility.

Comment author: Eugine_Nier 19 July 2013 06:51:23AM -4 points [-]

So get welfare or whatever other related social program is available in your area.

Comment author: Caspian 19 July 2013 11:32:29AM 1 point [-]

That's not intended for people who could work but choose not to. They require you to apply regularly for jobs, and the applications themselves can be stressful and difficult work if you don't like self-promotion.

Comment author: Vladimir_Golovin 21 June 2012 12:05:24PM 5 points [-]

Perhaps this is why I like Autofocus better than GTD. "It is fine to have incomplete tasks in your task list".

Also, non-punishment for failures may be one of the distinctions between play-like work and work-like work.

Comment author: Caspian 18 July 2013 01:35:29PM 1 point [-]

I think I even have work-like play where a game stops being fun. And yes, play-like work is what I want to achieve.

In response to comment by [deleted] on The Power of Reinforcement
Comment author: pnrjulius 05 July 2012 01:27:58AM 1 point [-]

If that's the case (and it seems like it is), then reinforcing yourself is going to be almost impossible, because you will by definition know the reinforcement script.

Comment author: Caspian 18 July 2013 01:18:12PM 0 points [-]

Reinforcing effort only in combination with poor performance wasn't the intent. Pick a better criterion that you can reinforce with honest self-praise. You do need to start with standards low enough that you can reward improvement from your initial level, though.

Comment author: Cyan 16 August 2010 03:54:13AM *  4 points [-]

Applied intermittent reinforcement results (1 month trial):

Household chores: The only time in the past month that I failed to tidy the kitchen before going to bed was two days ago, when I had a fever of 102 deg F. All other chores have 100% success rate.

Get to bed earlier: 0% success rate, alas.

Comment author: Caspian 18 July 2013 01:08:11PM 0 points [-]

I'm interested in what reward you used for going to bed earlier (or, given the 0% success rate, what you planned to use if it ever happened) and how/when you delivered it. Maybe rewarding subtasks would have helped.

Comment author: Caspian 18 July 2013 01:04:50PM 1 point [-]

I just read Don't Shoot The Dog, and one of the interesting bits was that it seemed like getting trained the way it described was fun for the animals, like a good game. Also as the skill was learnt the task difficulty level was raised so it wasn't too easy. And the rewards seemed somewhat symbolic - a clicker, and being fed with food that wasn't officially restricted outside the training sessions.

Thinking about applying it to myself, having the reward not be too important outside the game/practise means I'm not likely to want to bypass the game to get the reward directly. Having the system be fun means it's improving my quality of life in that way in addition to any behaviour change.

I haven't done much about ramping up the challenge. How does one make doing the dishes more challenging?

But I did make sure to make the rewards quicker/more frequent by rewarding subtasks.

Comment author: SaidAchmiz 15 July 2013 04:15:09PM *  5 points [-]

Well, it seems we have a conflict of interests. Do you agree?

If you do, do you think that it is fair to resolve it unilaterally in one direction? If you do not, what should be the compromise?

To concretize: some people (introverts? non-NTs? a sub-population defined some other way?) would prefer people-in-general to adopt a policy of not introducing oneself to strangers (at least in ways and circumstances such as described by pragmatist), because they prefer that people not introduce themselves to them personally.

Other people (extraverts? NTs? something else?) would prefer people-in-general to adopt a policy of introducing oneself to strangers, because they prefer that people introduce themselves to them personally.

Does this seem like a fair characterization of the situation?

If so, then certain solutions present themselves, some better than others.

We could agree that everyone should adopt one of the above policies. In such a case, those people who prefer the other policy would be harmed. (Make no mistake: harmed. It does no good to say that either side should "just deal with it". I recognize this to be true for those people who have preferences opposite to my own, as well as for myself.)

The alternative, by construction, would be some sort of compromise (a mixed policy? one with more nuance, or one sensitive to case-specific information? But it's not obvious to me what such a policy would look like), or a solution that obviated the conflict in the first place.

Your thoughts?

Comment author: Caspian 17 July 2013 03:41:22PM 0 points [-]

Well, it seems we have a conflict of interests. Do you agree?

Yes. We also have interests in common, but yes.

If you do, do you think that it is fair to resolve it unilaterally in one direction?

Better to resolve it after considering inputs from all parties. Beyond that it depends on specifics of the resolution.

If you do not, what should be the compromise?

To concretize: some people (introverts? non-NTs? a sub-population defined some other way?) would prefer people-in-general to adopt a policy of not introducing oneself to strangers (at least in ways and circumstances such as described by pragmatist), because they prefer that people not introduce themselves to them personally.

Several of the objections to the introduction suggest guidelines I would agree with: keep the introduction brief until the other person has had a chance to respond. Do not signal unwillingness to drop the conversation. Signaling the opposite may be advisable.

Other people (extraverts? NTs? something else?) would prefer people-in-general to adopt a policy of introducing oneself to strangers, because they prefer that people introduce themselves to them personally.

Yeah. Not that I always want to talk to someone, but sometimes I do.

Does this seem like a fair characterization of the situation?

Yes.

If so, then certain solutions present themselves, some better than others. We could agree that everyone should adopt one of the above policies. In such a case, those people who prefer the other policy would be harmed. (Make no mistake: harmed. It does no good to say that either side should "just deal with it". I recognize this to be true for those people who have preferences opposite to my own, as well as for myself.)

I think people sometimes conflate "it is okay for me to do this" with "this does no harm", "this does no harm that I am morally responsible for", and "this only does harm that someone else is morally responsible for, e.g. the victim".

The alternative, by construction, would be some sort of compromise (a mixed policy? one with more nuance, or one sensitive to case-specific information? But it's not obvious to me what such a policy would look like), or a solution that obviated the conflict in the first place. Your thoughts?

Working out such a policy could be a useful exercise. Some relevant information would be: when introductions are more or less unwelcome, for those who prefer to avoid them.

Comment author: [deleted] 14 July 2013 11:25:21PM *  10 points [-]

My suggestion: say “Hi” while looking at them; only introduce yourself to them if they say “Hi” back while looking back at you, and with an enthusiastic-sounding tone of voice.

(Myself, I go by Postel's Law here: I don't initiate conversations with strangers on a plane, but don't freak out when they initiate conversations with me either.)

In response to comment by [deleted] on "Stupid" questions thread
Comment author: Caspian 15 July 2013 03:55:11PM 1 point [-]

I think that if I were sitting really close beside someone, I would be less likely to want to face them - it would feel too intimate.
