Comments

dybypy · 20

In Deep Learning, Goodfellow writes in response to the No Free Lunch theorem:

"The philosophy of deep learning in general ... is that a wide range of tasks (such as all the intellectual tasks people can do) may all be solved effectively using very general-purpose forms of regularization"

Here regularization means any modification made to a learning algorithm that is intended to reduce its generalization error but not its training error; such modifications are essentially one type of assumption about the task to be learned.
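
To make that concrete, here is a minimal sketch of my own (not from the book): ridge regression, where the L2 penalty is one of the simplest "general-purpose" regularizers and encodes the assumption "prefer small weights".

```python
import numpy as np

# Toy illustration: an L2 penalty (ridge regression) as a general-purpose
# regularizer. Adding lambda * ||w||^2 to the objective raises training
# error slightly but can reduce generalization error.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))            # 50 examples, 10 features
w_true = np.zeros(10)
w_true[:3] = 1.0                         # only 3 features actually matter
y = X @ w_true + 0.1 * rng.normal(size=50)

lam = 1.0                                # regularization strength (arbitrary)
d = X.shape[1]
w_ols = np.linalg.solve(X.T @ X, X.T @ y)                      # no regularization
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)  # with L2 penalty

print("train MSE, OLS:  ", np.mean((X @ w_ols - y) ** 2))
print("train MSE, ridge:", np.mean((X @ w_ridge - y) ** 2))    # slightly higher
```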

dybypy · 60

I've done something similar to this.

I found my overconsumption of Reddit and YouTube distasteful, as I was spending the majority of my day on these sites mindlessly scrolling or refreshing.

I ended up cutting them out entirely for about two weeks and felt much better, working through a backlog of books instead, but I have since relapsed.

I like the idea of writing summaries; I think I will commit myself to that.

dybypy · 110

Isn't this horses?

It sounds like you want a way to set up a system which autonomously creates a product that meets certain parameters you set. In this case, you want self-building/self-designing cars.

I feel like the alien perspective already answers your question of how: car manufacturers are already these self-modifying, car-building systems, with humans as an integral part.

I don't think we can get to "growing cars" without humans as the substrate until something else is able to fill that role, such as AI or, in the case of horses, natural selection.

dybypy · 40

Could the time worked be the number you gamify? Though that might incentivize taking no breaks.

dybypy · 90

I'm not sure what points can actually be made by this line of reasoning. What I mean is that taking the naive view throws out any useful information and leaves us with the status quo.

And you conclude with boycott-itarianism plus the status quo, with each side essentially cancelling the other out in all the other subgroups. The boycott-itarianism addition comes mainly from the assumption of utilitarianism (since we're talking about EA and all).

Let's use this reasoning in a different situation, e.g. human population.

Whether a human life is net negative or net positive seems like a very difficult question. So naively there should be a 50/50 split (minus any arguments for a better prior).

50% we should reduce the human population; 50% we should increase the human population.

We can have similar splits, with each side having its:

- die-hards (euthanasia vs. octo-moms/dads)
- handshakes ("I'll kill one less person if you have one less kid", and vice versa)
- welfarists (we should increase the welfare of current humans)
- anything (the normal 2.5 kids)

which adds up to the status quo + the welfarists.

It'll always be the status quo + the moderates if you give them equal splits and assume the utilitarian position at the outset.
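
As a toy illustration of the cancellation (my own numbers, purely to show the arithmetic): give the "more people is good" and "more people is bad" hypotheses equal weight, and only the action both hypotheses endorse survives in expectation.

```python
# 50/50 prior over "adding people is net positive" vs. "net negative".
p_good, p_bad = 0.5, 0.5

# Illustrative values of each action under each hypothesis (arbitrary units).
actions = {
    "increase population": {"good": +1.0, "bad": -1.0},
    "decrease population": {"good": -1.0, "bad": +1.0},
    "improve welfare of existing people": {"good": +0.5, "bad": +0.5},
}

for name, v in actions.items():
    ev = p_good * v["good"] + p_bad * v["bad"]
    print(f"{name}: expected value = {ev:+.2f}")
# The directional actions cancel to 0.00; only "improve welfare" stays positive,
# which is the "status quo + the moderates" conclusion above.
```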

dybypy · 10

So you got better at management by dropping most of the programming work you used to do and instead delegating it to others?