Comment author: Vaniver 25 December 2014 03:54:34PM 4 points [-]

Behind every Steve Jobs are thousands of very intelligent and hard-working employees and millions of smart people who have created a larger ecosystem. If one only pays attention to Steve Jobs they will leave out most of the work. They will praise Steve Jobs far too highly and disregard the importance of unglamorous labor.

I think that Steve Jobs is a bad example here, since his specific genius is not in designing things himself but in wringing as much productive work as possible out of intelligent and hard-working employees doing unglamorous labor. (Consider Edison, whose primary invention was the modern R&D lab, vs. Tesla, who was a good inventor but terrible businessman or manager.)

Comment author: ozziegooen 25 December 2014 06:21:51PM 0 points [-]

I used Steve Jobs because he's about the most popular person in the Valley now, and I used him in the beginning of the essay.

Edison's R&D lab itself relied on lots of other skilled engineers (Tesla included at one time).

Tesla, out of all the engineers I know, does stand out as someone who did work solo. Even he, though, needed Westinghouse to manufacture and sell much of his work, and many funders to fund it all. Plus, I think in some ways Tesla may be a mediocre role model given how supremely intelligent he was (seemingly more so than the other two). This has meant that I personally have found it difficult to emulate him.

Comment author: ozziegooen 25 December 2014 05:56:01AM *  6 points [-]

I was one of the people who expressed an opinion against the LW content. In general I liked the event, but found those parts off-putting. I'm really surprised that people new to it seemed so oblivious.

Perhaps one reason why people who were familiar with that content were hesitant about showing it to others was that they were afraid it would reflect poorly on them. If I brought a bunch of 'regular' friends to a 'transhumanist' meetup that I told them I was somewhat involved in, I would really be afraid of them getting a poor impression of transhumanism.

It's kind of like taking your significant other to meet your parents. Your significant other may not mind your parents' quirks (or vice versa), but you notice every one and are horrified for them.

Another thing that comes to mind is that some of the 'serious' talk was controversial even among this crowd. Personally I really don't believe that humans should live forever, for example. Here the people who care the most about it would also care the most about discrepancies. For instance, a very devout Catholic would be the first to get angered by what they feel to be a wrong or mistaken representation of Catholicism at what seems like a very sacred event.

Overall though, thanks for getting feedback and writing this all up! I'm really interested in how it progresses.

Comment author: JonahSinick 05 April 2014 11:29:02PM *  1 point [-]

I studied engineering, but looking back Computer Science seems like it would have been a lot better.

I'd be very interested in hearing more – you're just the sort of person who I was hoping would comment (graduate from a top program several years out who switched fields and so has had exposure to both). In what respects would majoring in computer science have been better for you?

Comment author: ozziegooen 06 April 2014 12:17:14AM *  8 points [-]
  1. Computer science definitely seems better for making companies / entrepreneurship potential.

  2. In my experience, engineering jobs are far more segmented. You can be awesome at making microprocessors, but then only a few companies may be able to hire you. Other fields are similar: there are lots of interesting areas within engineering, but within each, it seems like there are only a very few specific companies, especially within a given geographic area.

  3. For whatever reason, a lot of engineering companies just don't seem that great (I think it's the lack of competition). Tesla and SpaceX (two of the top companies engineering friends would find jobs at) are much worse to work at than one may expect (see the Glassdoor ratings). When you can find a good one, you'd better hope you keep the job: you often become specialized, and there just aren't many other great companies in the space (Intel is an example).

  4. I think that computer science jobs are more flexible than engineering jobs. I'm a bit more afraid of engineering jobs getting automated than computer science jobs (if you're ok learning a lot of new languages).

  5. More startups in computer science, if you're into that.

  6. The fact that engineering is way harder in college (at least at my college) is an important factor. I really disliked much of my college experience because of the difficulty. Now a lot of the information doesn't seem applicable to my life at all (I'll forget it quickly).

  7. It seems like with CS you get the bonus of understanding AI risk more, if you're into that.

I think that my (general engineering) degree definitely gives me a bit of a diverse background. I kind of have the option of going to a hardware/software startup, although I'm not sure I want to go in that direction with my career (it seems to narrow your career without improving your expected earnings). I like to think that it may be useful if I want to go into venture capital or some more diverse or meta-level positions, but now I'm really not sure about that.

One huge benefit to engineering is that I feel more comfortable making cool stuff, like Arduino hobby circuits or Burning Man floats if I wanted to. It does feel really cool. It doesn't help my career as much, though.

(For reference, I graduated with a 3.0 at Harvey Mudd College in General Engineering, focused a bit on electrical. I spent 1 year doing web entrepreneurship with a cofounder, then another year with 80,000 Hours doing web development.)

Comment author: ozziegooen 05 April 2014 11:18:16PM 3 points [-]

I want to note that engineering degrees can be more work than computer science degrees. This is definitely true at Harvey Mudd College.

I studied engineering, but looking back Computer Science seems like it would have been a lot better. I've headed there since, but I definitely feel like I'm playing catch-up in comparison.

Comment author: ozziegooen 19 March 2014 01:46:50AM 0 points [-]

If all 'moral worth' meant was the consequences of what happened, I just wouldn't deem 'moral worth' to be that relevant towards judging. It would seem to me like we're making 'moral worth' into something kind of irrelevant except from a completely pragmatic point of view.

Not sure if saying 'making the best decision you could is all you can do' is that much of a shortcut. I mean, I would imagine that a lot of smart people would realize that 'making the best decision you can' is still really, really difficult. If you act as your only judge (not just all of you, but only you at any given moment), then you may have less motivation; however, it would seem strange to me if 'fear of being judged' is the one thing that keeps us moral, even if it happens to become apparent that judging is technically impossible.

Comment author: ozziegooen 19 March 2014 01:50:03AM 0 points [-]

Also, keep in mind that in this case 'every decision you make is "good"', but 'good' is defined as everything, so it becomes a neutral term. In the future you can still learn stuff; you can say "I made the right decision at this time using what I knew, but then the results taught me some new information, and now I would know to choose differently next time".

Comment author: shokwave 18 March 2014 04:08:23PM *  1 point [-]

One would be ethical if their actions end up with positive outcomes, disregarding the intentions of those actions. For instance, a terrorist who accidentally foils an otherwise catastrophic terrorist plan would have done a very ‘morally good’ action.

This seems intuitively strange to many; it definitely is to me. Instead, ‘expected value’ seems to be a better way of both making decisions and judging the decisions made by others.

If the actual outcome of your action was positive, it was a good action. Buying the winning lottery ticket, as per your example, was a good action. Buying a losing lottery ticket was a bad action. Since we care about just the consequences of the action, the goodness of an action can only be evaluated after the consequences have been observed - at some point after the action was taken (I think this is enforced by the direction of causality, but maybe not).

So we don't know if an action is good or not until it's in the past. But we can only choose future actions! What's a consequentialist to do? (Equivalently, since we don't know whether a lottery ticket is a winner or a loser until the draw, how can we choose to buy the winning ticket and choose not to buy the losing ticket?) Well, we make the best choice under uncertainty that we can, which is to use expected values. The probability-literate person is making the best choice under uncertainty they can; the lottery player is not.
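The expected-value comparison above can be sketched in a few lines of Python. The ticket price, prize, and odds below are made-up illustrative numbers, not any real lottery's:

```python
# Hedged sketch: a probability-literate chooser deciding whether to buy
# a lottery ticket, using expected value. All figures are assumptions.

TICKET_PRICE = 1.00
PRIZE = 1_000_000.00
WIN_PROBABILITY = 1 / 10_000_000  # assumed odds of winning

# Expected value of buying: probability-weighted sum over both outcomes.
ev_buy = (WIN_PROBABILITY * (PRIZE - TICKET_PRICE)
          + (1 - WIN_PROBABILITY) * (-TICKET_PRICE))
ev_skip = 0.0  # not buying changes nothing

# Pick the option with the higher expected value.
best_choice = "buy" if ev_buy > ev_skip else "skip"

print(ev_buy)       # roughly -0.90: the ticket loses ~90 cents in expectation
print(best_choice)
```

Under these numbers the procedure says "skip" even though, on the rare draw where the ticket wins, skipping turns out to have been the action with the worse consequences; that is exactly the gap between a good decision procedure and a good action.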

The next step is to say that we want as many good things to happen as possible, so "expected value calculations" is a correct way of making decisions (that can sometimes produce bad actions, but less often than others) and "wishful thinking" is an incorrect way of making decisions.

So the probability-literate used a correct decision procedure to come to a bad action, and the lottery player used an incorrect decision procedure to come to a good action.

The last step is to say that judging past actions changes nothing about the consequences of that action, but judging decision procedures does change something about future consequences (via changing which actions get taken). Here is the value in judging a person's decision procedures. The terrorist used a very morally wrong decision procedure to come up with a very morally good action: the act is good and the decision procedure is bad, and if we judge the terrorist by their decision procedure we influence future actions.

--

I think it's very important for consequentialists to always remember that an action's moral worth is evaluated on its consequences, and not on the decision theory that produced it. This means that despite your best efforts, you will at some point make the best decision possible and still commit bad acts.

If you let it collapse - if you take the shortcut and say "making the best decision you could is all you can do", then every decision you make is good, except for inattentiveness or laziness, and you lose the chance to find out that expected value calculations or Bayes' theorem needs to go out the window.

Comment author: TheAncientGeek 17 March 2014 12:34:54PM *  1 point [-]

Yes. "Good" can mean desirable outcomes, or responsible decision making. The first obviously matches consequentialism. It appears not to be obvious to Lesswrongians that the second matches deontology. When we judge whether someone behaved culpably or not, we want to know whether they applied the rules and heuristics appropriate to their reference class (doctor, CEO, ship's captain...). The consequences of their decision may have landed them in a tribunal, but we don't hold people to blame for applying the rules and getting the wrong results.

Comment author: ozziegooen 19 March 2014 01:34:19AM 0 points [-]

Perhaps I have misunderstood consequentialism and deontology, but my impression was that (many forms of) consequentialism prefers that people optimize expected utility, while deontology does not (it would consider other things, like 'not lying', as considerably more important). My impression was that this was basically the main differentiating factor.

Agree about the tribunal situation. From a consequentialist viewpoint it would seem like we would want to judge people formally (in tribunals) according to how well they made an expected value decision, rather than on the outcome. For one, because otherwise we would have a lot more court cases (anyone causally linked to a crime would be responsible).

Comment author: shminux 17 March 2014 01:23:49AM -1 points [-]

Optimizing Future Decisions: Actual vs. Expected Value

Not sure what you mean here. Future is never actual, only expected (or, more often, unexpected).

Comment author: ozziegooen 17 March 2014 01:37:31AM 0 points [-]

This just has to do with a question that was poorly phrased to begin with: when one makes decisions, should they optimize for 'expected value' or 'actual value'? The answer is that the 'actual value' is obviously unknowable, so it's a moot question. That said, I've discussed this with people who weren't sure, so I wanted to make this clear.

I call these "future decisions" to contrast them with 'past decisions' which can't really be made but judged, as they have already occurred.

Comment author: whales 16 March 2014 09:57:05PM 0 points [-]

Right, it seems kind of strange to declare that you're considering only states of the world in your decisions, but then to treat judgments of right and wrong as a deontological layer on top of that where you consider whether the consequentialist rule was followed correctly. But that does seem to be a mainstream version of consequentialism. As far as I can tell, it mostly leads to convoluted, confused-sounding arguments like the above and the linked talk by Neiladri Sinhababu, but maybe I'm missing something important.

Comment author: ozziegooen 17 March 2014 12:28:36AM 0 points [-]

I think it leads to very confusing and technical arguments if free will is assumed. If not, there's basically no reason to morally judge others (other than the learning potential for future decisions).

I think the mainstream version of consequentialism, if I understand what you are saying correctly, can still be followed for personal decisions as they happen. Or, when making a decision, you personally do your best to optimize for the future. That seems quite reasonable to me, it's just really hard to understand and criticize from an outside perspective.

Comment author: Protagoras 16 March 2014 09:11:35PM 6 points [-]

You come to what is more or less the right consequentialist answer in the end, but it seems to me that your path is needlessly convoluted. Why are we judging past actions? Generally, the reason is to give us insight into and perhaps influence future decisions. So we don't judge the lottery purchase to have been good, because it wouldn't be a good idea to imitate it (we have no way to successfully imitate "buy a winning lottery ticket" behavior, and imitating "buy a lottery ticket" behavior has poor expected utility, and similarly for many broader or narrower classes of similar actions), and so we want to discourage people from imitating it, not encourage them. If we're being good consequentialists, what other means could it possibly be appropriate to use in deciding how to judge other than basing it on the consequences of judging in that way?

Comment author: ozziegooen 17 March 2014 12:08:23AM *  0 points [-]

your path is needlessly convoluted

Agreed. This really wasn't my best piece. I figured it would be better to publish it than not though. Was hoping it would turn out better. If the response is good I may rewrite it. However, I do feel like it is a complicated issue, so could require quite a bit of text to explain no matter how good the writing style.

Why are we judging past actions?

The first reason that comes to my mind is to say things like "X is a bad person", or "Y cheated on this test, which was bad", etc. If we are to evaluate them consequentially, I'm making the argument that seeing things from their point of view is exceedingly difficult. It's thus very difficult to ask if another person is acting in a 'utilitarian' way, especially if that person claims to be.

So we don't judge the lottery purchase to have been good,

In regard to the lottery purchase, the question is what 'good' means in the first place. I'm saying it is strongly coupled to a specific reference frame, and it's hard to make it an 'objective good' of any kind. However, it can be used to more clearly talk about specific kinds of 'good'. For instance, perhaps in this case if we used the 'reference frame' of our audience, we could explain the situation to them well, discouraging them (assuming a realistic audience).

If we're being good consequentialists, what other means could it possibly be appropriate to use in deciding how to judge other than basing it on the consequences of judging in that way?

I guess here the question is what it means to 'judge'. If 'judging' just means saying what happened (there was a person, he did this, this happened), then yes. If it is attempting to understand the decision making of the person in order to understand how 'morally good' that person is, or can be expected to be, those are different questions.
