[Link] Figureheads, ghost-writers and pseudonymous quant bloggers: the recent evolution of authorship in science publishing

2 Gram_Stone 28 September 2016 11:16PM

[Link] Yudkowsky's Guide to Writing Intelligent Characters

4 Vaniver 28 September 2016 02:36PM

Consider having sparse insides

12 AnnaSalamon 01 April 2016 12:07AM

It's easier to seek true beliefs if you keep your (epistemic) identity small. (E.g., if you avoid beliefs like "I am a democrat", and say only "I am a seeker of accurate world-models, whatever those turn out to be".)

It seems analogously easier to seek effective internal architectures if you also keep non-epistemic parts of your identity small -- not "I am a person who enjoys nature", nor "I am someone who values mathematics" nor "I am a person who aims to become good at email" but only "I am a person who aims to be effective, whatever that turns out to entail (and who is willing to let much of my identity burn in the process)".

There are obviously hazards as well as upsides that come with this; still, the upsides seem worth putting out there.

The two biggest exceptions I would personally make, which seem to mitigate the downsides: "I am a person who keeps promises" and "I am a person who is loyal to [small set of people] and who can be relied upon to cooperate more broadly -- whatever that turns out to entail".

 

Thoughts welcome.

Turning the Technical Crank

43 Error 05 April 2016 05:36AM

A few months ago, Vaniver wrote a really long post speculating about potential futures for Less Wrong, with a focus on the idea that the spread of the Less Wrong diaspora has left the site weak and fragmented. I wasn't here for our high water mark, so I don't really have an informed opinion on what has socially changed since then. But a number of complaints are technical, and as an IT person, I thought I had some useful things to say.

I argued at the time that many of the technical challenges of the diaspora were solved problems, and that the solution was NNTP -- an ancient, yet still extant, discussion protocol. I am something of a crank on the subject and didn't expect much of a reception. I was pleasantly surprised by the 18 karma it generated, and tried to write up a full post arguing the point.

I failed. I was trying to write a manifesto, didn't really know how to do it right, and kept running into a vast inferential distance I couldn't seem to cross. I'm a product of a prior age of the Internet, from before the http prefix assumed its imperial crown; I kept wanting to say things that I knew would make no sense to anyone who came of age this millennium. I got bogged down in irrelevant technical minutiae about how to implement features X, Y, and Z. Eventually I decided I was attacking the wrong problem; I was thinking about 'how do I promote NNTP', when really I should have been going after 'what would an ideal discussion platform look like and how does NNTP get us there, if it does?'

So I'm going to go after that first, and work on the inferential distance problem, and then I'm going to talk about NNTP, and see where that goes and what could be done better. I still believe it's the closest thing to a good, available technological Schelling point, but it's going to take a lot of words to get there from here, and I might change my mind under persuasive argument. We'll see.

Fortunately, this is Less Wrong, and sequences are a thing here. This is the first post in an intended sequence on mechanisms of discussion. I know it's a bit off the beaten track of Less Wrong subject matter. I posit that it's both relevant to our difficulties and probably more useful and/or interesting than most of what comes through these days. I just took the 2016 survey and it has a couple of sections on the effects of the diaspora, so I'm guessing it's on topic for meta purposes if not for site-subject purposes.

Less Than Ideal Discussion

To solve a problem you must first define it. Looking at the LessWrong 2.0 post, I see the following technical problems, at a minimum; I'll edit this with suggestions from comments.

  1. Aggregation of posts. Our best authors have formed their own fiefdoms and their work is not terribly visible here. We currently have limited support for this via the sidebar, but that's it.
  2. Aggregation of comments. You can see diaspora authors in the sidebar, but you can't comment from here.
  3. Aggregation of community. This sounds like a social problem but it isn't. You can start a new blog, but unless you plan on also going out of your way to market it then your chances of starting a discussion boil down to "hope it catches the attention of Yvain or someone else similarly prominent in the community." Non-prominent individuals can theoretically post here; yet this is the place we are decrying as moribund.
  4. Incomplete and poor curation. We currently do this via Promoted, badly, and via the diaspora sidebar, also badly.
  5. Pitiful interface feature set. This is not so much a Less Wrong-specific problem as a 2010s-internet problem; people who inhabit SSC have probably seen me respond to feature complaints with "they had something that did that in the 90s, but nobody uses it." (my own bugbear is searching for comments by author-plus-content).
  6. Changes are hamstrung by the existing architecture, which gets you volunteer reactions like this one.

I see these meta-technical problems:

  1. Expertise is scarce. Few people are in a position to technically improve the site, and those that are, have other demands on their time.
  2. The Trivial Inconvenience Problem limits the scope of proposed changes to those that are not inconvenient to commenters or authors.
  3. Getting cooperation from diaspora authors is a coordination problem. Are we better than average at handling those? I don't know.

Slightly Less Horrible Discussion

"Solving" community maintenance is a hard problem, but to the extent that pieces of it can be solved technologically, the solution might include these ultra-high-level elements:

  1. Centralized from the user perspective. A reader should be able to interact with the entire community in one place, and it should be recognizable as a community.
  2. Decentralized from the author perspective. Diaspora authors seem to like having their own fiefdoms, and the social problem of "all the best posters went elsewhere" can't be solved without their cooperation. Therefore any technical solution must allow for it.
  3. Proper division of labor. Scott Alexander probably should not have to concern himself with user feature requests; that's not his comparative advantage and I'd rather he spend his time inventing moral cosmologies. I suspect he would prefer the same. The same goes for Eliezer Yudkowsky or any of our still-writing-elsewhere folks.
  4. Really good moderation tools.
  5. Easy entrance. New users should be able to join the discussion without a lot of hassle. Old authors that want to return should be able to do so and, preferably, bring their existing content with them.
  6. Easy exit. Authors who don't like the way the community is heading should be able to jump ship -- and, crucially, bring their content with them to their new ship. Conveniently. This is essentially what has happened, except old content is hostage here.
  7. Separate policy and mechanism within the site architecture. Let this one pass for now if you don't know what it means; it's the first big inferential hurdle I need to cross and I'll be starting soon enough. (A toy sketch follows just below.)

As with the previous, I'll update this from the comments if necessary.
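To give a rough, non-authoritative sense of what item 7 gestures at, here is a minimal Python sketch of the policy/mechanism split as it is commonly understood in systems design. It is my own illustration, not the author's forthcoming explanation, and the names (`apply_moderation`, `Policy`, `no_shouting`) are invented for this example rather than part of any proposed Less Wrong architecture.

```python
from typing import Callable, List

Comment = str
Policy = Callable[[Comment], bool]   # a policy decides *what* is acceptable

def apply_moderation(comments: List[Comment], policy: Policy) -> List[Comment]:
    """Mechanism: the site only knows *how* to apply some policy.

    It encodes no opinion about what good content is, so policies can be
    swapped per author, per moderator, or per community without touching it.
    """
    return [c for c in comments if policy(c)]

# Two interchangeable policies; the mechanism above never changes.
no_shouting: Policy = lambda c: not c.isupper()
allow_everything: Policy = lambda c: True

if __name__ == "__main__":
    comments = ["thoughtful reply", "ALL CAPS RANT"]
    print(apply_moderation(comments, no_shouting))       # ['thoughtful reply']
    print(apply_moderation(comments, allow_everything))  # both comments
```

The point of the separation is simply that the generic machinery and the site-specific (or author-specific) rules can evolve independently.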

Getting There From Here

As I said at the start, I feel on firmer ground talking about technical issues than social ones. But I have to acknowledge one strong social opinion: I believe the greatest factor in Less Wrong's decline is the departure of our best authors for personal blogs. Any plan for revitalization has to provide an improved substitute for a personal blog, because that's where everyone seems to end up going. You need something that looks and behaves like a blog to the author or casual readers, but integrates seamlessly into a community discussion gateway.

I argue that this can be achieved. I argue that the technical challenges are solvable and the inherent coordination problem is also solvable, provided the people involved still have an interest in solving it.

And I argue that it can be done -- and done better than what we have now -- using technology that has existed since the '90s.

I don't argue that this actually will be achieved in anything like the way I think it ought to be. As mentioned up top, I am a crank, and I have no access whatsoever to anybody with any community pull. My odds of pushing through this agenda are basically nil. But we're all about crazy thought experiments, right?

This topic is something I've wanted to write about for a long time. Since it's not typical Less Wrong fare, I'll take the karma on this post as a referendum on whether the community would like to see it here.

Assuming there's interest, the sequence will look something like this (subject to reorganization as I go along, since I'm pulling this from some lengthy but horribly disorganized notes; in particular I might swap subsequences 2 and 3):

  1. Technical Architecture
    1. Your Web Browser Is Not Your Client
    2. Specialized Protocols: or, NNTP and its Bastard Children
    3. Moderation, Personal Gardens, and Public Parks
    4. Content, Presentation, and the Division of Labor
    5. The Proper Placement of User Features
    6. Hard Things that are Suddenly Easy: or, what does client control gain us?
    7. Your Web Browser Is Still Not Your Client (but you don't need to know that)
  2. Meta-Technical Conflicts (or, obstacles to adoption)
    1. Never Bet Against Convenience
    2. Conflicting Commenter, Author, and Admin Preferences
    3. Lipstick on the Configuration Pig
    4. Incremental Implementation and the Coordination Problem.
    5. Lowering Barriers to Entry and Exit
  3. Technical and Social Interoperability
    1. Benefits and Drawbacks of Standards
    2. Input Formats and Quoting Conventions
    3. Faking Functionality
    4. Why Reddit Makes Me Cry
    5. What NNTP Can't Do
  4. Implementation of Nonstandard Features
    1. Some desirable feature #1
    2. Some desirable feature #2
    3. ...etc. This subsequence is only necessary if someone actually wants to try and do what I'm arguing for, which I think unlikely.

(Meta-meta: This post was written in Markdown, converted to HTML for posting using Pandoc, and took around four hours to write. I can often be found lurking on #lesswrong or #slatestarcodex on workday afternoons if anyone wants to discuss it, but I don't promise to answer quickly because, well, workday)

[Edited to add: At +10/92% karma I figure continuing is probably worth it. After reading comments I'm going to try to slim it down a lot from the outline above, though. I still want to hit all those points but they probably don't all need a full post's space. Note that I'm not Scott or Eliezer, I write like I bleed, so what I do post will likely be spaced out]

Lesswrong 2016 Survey

29 Elo 30 March 2016 06:17PM

It’s time for a new survey!

Take the survey now


The details of the last survey can be found here.  And the results can be found here.

 

I posted a few weeks back asking for suggestions for questions to include on the survey.  As much as we’d like to include more of them, we all know what happens when we have too many questions. The following graph is from the last survey.


http://i.imgur.com/KFTn2Bt.png

(Source: JD’s analysis of 2014 survey data)


Two factors seem to predict if a question will get an answer:

  1. The position

  2. Whether people want to answer it. (Obviously)


People answer fewer questions as we approach the end. They also skip tricky questions. The least answered question on the last survey was “what is your favourite lw post, provide a link”, which I assume was mostly skipped because of the effort required either in settling on a favourite or in finding a link to it.  The second most skipped questions were the digit-ratio questions, which require more work (get out a ruler and measure) compared to the others. This is unsurprising.


This year’s survey is almost the same size as the last one (though just a wee bit smaller).  Preliminary estimates suggest you should put aside 25 minutes to take the survey; however, you can pause at any time and come back to the survey when you have more time.  If you’re interested in helping process the survey data, please speak up either in a comment or a PM.


We’re focusing this year particularly on getting a glimpse of the size and shape of the LessWrong diaspora.  With that in mind, please make sure, if possible, that your friends (who might be less connected but still hang around in associated circles) get a chance to see that the survey exists, and if you’re up to it, encourage them to fill out a copy of the survey.


The survey is hosted and managed by the team at FortForecast; you’ll be hearing more from them soon. The survey can be accessed through http://lesswrong.com/2016survey.


Survey responses are anonymous in that you’re not asked for your name. At the end we plan to do an opt-in public dump of the data. Before publication the row order will be scrambled; datestamps, IP addresses, and any other non-survey-question information will be stripped; and certain questions which are marked private, such as the (optional) sign-up for our mailing list, will not be included. It helps the most if you say yes, but we can understand if you don’t.
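For readers curious what that publication step might look like mechanically, here is a small, hypothetical Python/pandas sketch. It is my own illustration, not FortForecast's actual pipeline, and the column names (`timestamp`, `ip_address`, `share_publicly`, etc.) are invented for the example.

```python
import pandas as pd

# Hypothetical column names; the real survey schema may differ.
PRIVATE_COLUMNS = ["timestamp", "ip_address", "mailing_list_signup"]

def prepare_public_dump(responses: pd.DataFrame) -> pd.DataFrame:
    """Keep only opt-in rows, strip non-survey metadata, and scramble row order."""
    public = responses[responses["share_publicly"] == "Yes"]
    public = public.drop(columns=PRIVATE_COLUMNS, errors="ignore")
    return public.sample(frac=1).reset_index(drop=True)

# Example usage (assuming df holds the raw responses):
# prepare_public_dump(df).to_csv("lw2016_public_dump.csv", index=False)
```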


Thanks to Namespace (JD) and the FortForecast team, the Slack, the #lesswrong IRC on freenode, and everyone else who offered help in putting the survey together. Special thanks to Scott Alexander, whose 2014 survey was the foundation for this one.


When answering the survey, I ask that you be helpful with the format of your answers if you want them to be useful. For example, if a question asks for a number, please reply with “4” not “four”.  Going by the last survey we may very well get thousands of responses, and cleaning them all by hand will cost a fortune on Mechanical Turk. (And that’s for the ones we can put on Mechanical Turk!) Thanks for your consideration.
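As a rough illustration of why “4” is so much cheaper to process than “four”, here is a toy Python cleaning helper. It is my own sketch, not part of the survey tooling, and the function and dictionary names are made up for the example.

```python
import re

# Toy lookup for spelled-out answers; real cleaning needs far more cases,
# which is exactly the work that otherwise ends up on Mechanical Turk.
WORD_NUMBERS = {"zero": 0, "one": 1, "two": 2, "three": 3, "four": 4,
                "five": 5, "six": 6, "seven": 7, "eight": 8, "nine": 9, "ten": 10}

def parse_numeric_answer(raw: str):
    """Return an int for answers like '4' or 'four', or None if unparseable."""
    text = raw.strip().lower()
    if text in WORD_NUMBERS:
        return WORD_NUMBERS[text]
    match = re.fullmatch(r"\d+", text)
    return int(match.group()) if match else None

# parse_numeric_answer("4") -> 4, parse_numeric_answer("four") -> 4
# parse_numeric_answer("roughly four-ish") -> None (off to a human)
```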

 

The survey will be open until the 1st of May 2016.

 


Addendum from JD at FortForecast: During user testing we’ve encountered reports of an error some users get when they try to take the survey which erroneously reports that our database is down. We think we’ve finally stamped it out but this particular bug has proven resilient. If you get this error and still want to take the survey here are the steps to mitigate it:

 

  1. Refresh the survey, it will still be broken. You should see a screen with question titles but no questions.

  2. Press the “Exit and clear survey” button, this will reset your survey responses and allow you to try again fresh.

  3. Rinse and repeat until you manage to successfully answer the first two questions and move on. It usually doesn’t take more than one or two tries. We haven’t received reports of the bug occurring past this stage.


If you encounter this please mail jd@fortforecast.com with details. Screenshots would be appreciated but if you don’t have the time just copy and paste the error message you get into the email.

 

Take the survey now


Meta - this took 2 hours to write and was reviewed by the slack.


My Table of contents can be found here.

Look for Lone Correct Contrarians

20 Gram_Stone 13 March 2016 04:11PM

Related to: The Correct Contrarian Cluster, The General Factor of Correctness

(Content note: Explicitly about spreading rationalist memes, increasing the size of the rationalist movement, and proselytizing. I also regularly use the word 'we' to refer to the rationalist community/subculture. You might prefer not to read this if you don't like that sort of thing and/or you don't think I'm qualified to write about that sort of thing and/or you're not interested in providing constructive criticism.)

I've tried to introduce a number of people to this culture and the ideas within it, but it takes some finesse to get a random individual from the world population to keep thinking about these things and apply them. My personal efforts have been very hit-or-miss. Others have told me that they've been more successful. But I think there are many people that share my experience. This is unfortunate: we want people to be more rational and we want more rational people.

At any rate, this is not about the art of raising the sanity waterline, but the more general task of spreading rationalist memes. Some people naturally arrive at these ideas, but they usually have to find them through other people first. This is really about all of the people in the world who are like you probably were before you found this culture; the people who would care about it, and invest in it, as it is right now, if only they knew it existed.

I'm going to be vague for the sake of anonymity, but here it goes:

I was reading a book review on Amazon, and I really liked it. The writer felt like a kindred spirit. I immediately saw that they were capable of coming to non-obvious conclusions, so I kept reading. Then I checked their review history in the hope that I would find other good books and reviews. And it was very strange.

They did a bunch of stuff that very few humans do. They realized that nuclear power has risks but that the benefits heavily outweigh the risks given the appropriate alternative, and they realized that humans overestimate the risks of nuclear power for silly reasons. They noticed when people were getting confused about labels and pointed out the general mistake, as well as pointing out what everyone should really be talking about. They acknowledged individual and average IQ differences and realized the correct policy implications. They really understood evolution, they took evolutionary psychology seriously, and they didn't care if it was labeled as sociobiology. They used the word 'numerate.'

And the reviews ranged over more than a decade of time. These were persistent interests.

I don't know what other people do when they discover that a stranger like this exists, but the first thing that I try to do is talk to them. It's not like I'm going to run into them on the sidewalk.

Amazon had no messaging feature that I could find, so I looked for a website, and I found one. I found even more evidence, and that's certainly what it was. They were interested in altruism, including how it goes wrong; computer science; statistics; psychology; ethics; coordination failures; failures of academic and scientific institutions; educational reform; cryptocurrency; etc. At this point I considered it more likely than not that they already knew everything that I wanted to tell them, and that they already self-identified as a rationalist, or that they had a contrarian reason for not identifying as such.

So I found their email address. I told them that they were a great reviewer, that I was surprised that they had come to so many correct contrarian conclusions, and that, if they didn't already know, there was a whole culture of people like them.

They replied in ten minutes. They were busy, but they liked what I had to say, and as a matter of fact, a friend had already convinced them to buy Rationality: From AI to Zombies. They said they hadn't read much relative to the size of the book because it's so large, but they loved it so far and they wanted to keep reading.

(You might postulate that I found a review by a user like this on a different book because I was recommended this book and both of us were interested in Rationality: From AI to Zombies. However, the first review I read by this user was for a book on unusual gardening methods, that I found in a search for books about gardening methods. For the sake of anonymity, however, my unusual gardening methods must remain a secret. It is reasonable to postulate that there would be some sort of sampling bias like the one that I have described, but given what I know, it is likely that this is not that. You certainly could still postulate a correlation by means of books about unusual gardening methods, however.)

Maybe that extra push made the difference. Maybe if there hadn't been a friend, I would've made the difference.

Who knew that's how my morning would turn out?

As I've said in some of my other posts, but not in so many words, maybe we should start doing this accidentally effective thing deliberately!

I know there's probably controversy about whether or not rationalists should proselytize, but I've been in favor of it for a while. And if you're like me, then I don't think this is a very special effort to make. I'm sure sometimes you see a little thread, and you think, "Wow, they're a lot like me; they're a lot like us, in fact; I wonder if there are other things too. I wonder if they would care about this."

Don't just move on! That's Bayesian evidence!

I dare you to follow that path to its destination. I dare you to reach out. It doesn't cost much.

And obviously there are ways to make yourself look creepy or weird or crazy. But I said to reach out, not to reach out badly. If you could figure out how to do it right, it could have a large impact. And these people are likely to be pretty reasonable. You should keep a look out in the future.

Speaking of the future, it's worth noting that I ended up reading the first review because of an automated Amazon book recommendation and subsequent curiosity. You know we're in the data. We are out there and there are ways to find us. In a sense, we aren't exactly low-hanging fruit. But in another sense, we are.

I've never read a word of the Methods of Rationality, but I have to shoehorn this in: we need to write the program that sends a Hogwarts acceptance letter to witches and wizards on their eleventh birthday.

Black box knowledge

2 Elo 03 March 2016 10:40PM

When we want to censor an image, we put a black box over the area we want to censor.  In a similar sense, we can purposely censor our knowledge.  This comes in particularly handy when thinking about things that might be complicated but that we don't need to know.


A deliberate black box around how toasters work would look like this:  

bread -> black box -> toast

Not all processes need knowing; for now, a black box can be a placeholder for the future.


With the power provided to us by a black box, we can identify what we don't know.  We can say: Hey!  I don't know how a toaster works, but it would take about 2 hours to work it out.  If I ever did want to work it out, I could just spend the two hours to do it.  Until then, I have saved myself two hours.  With other, more time-burdensome fields it works even better.  Say, tax.

Need to file tax -> black box accountant -> don't need to file my tax because I got the accountant to do it for me.

I know I can file my own tax, but that might be 100-200 hours of knowing everything an accountant knows about tax.  (It might also be 10 hours, depending on your country and its tax system.)  For now I can assume that hiring an accountant saved me a number of hours of doing it myself.  So - Winning!


Take car repairs.  On the one hand, you could do it yourself and unpack the black box; on the other, you could trade your existing currency $$ (which you already traded your time to earn) for someone else's skills and time to repair the car.  The system looks like this:

Broken car -> black box mechanic -> working car

By deliberately not knowing how it works, we can tap out of even trying to figure it out for now.  The other advantage is that we can look at not just what we know in terms of black boxes but, more importantly, what we don't know.  We can build better maps by knowing what we don't know.
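In programming terms this is just interface versus implementation. Here is a minimal Python sketch of the toaster example above; the bread/toast framing and the two-hour estimate come from this post, while the function names and everything else are invented for illustration.

```python
from typing import Callable

# A black box is a step whose interface (input -> output) we know,
# and whose internals we deliberately leave unopened for now.
BlackBox = Callable[[str], str]

def toaster(bread: str) -> str:
    """bread -> black box -> toast.

    Opening this box is the ~2 hours of learning how toasters work;
    until someone needs to, the placeholder below marks the gap in our map.
    """
    raise NotImplementedError("Unopened black box: spend the two hours only if needed.")

def make_breakfast(toast_step: BlackBox) -> str:
    # We can still plan and build around the box without opening it.
    return toast_step("bread")

# make_breakfast(toaster) would raise until somebody opens the box;
# swapping in a real implementation later doesn't change make_breakfast.
```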


Computers:

Logic gates -> Black box computeryness -> www.lesswrong.com

Or maybe it's like this: (for more advanced users)

Computers: 

Logic gates -> flip flops -> Black box CPU -> black box GPU -> www.lesswrong.com


The black-box system happens to also have a meme about it:

Step 1. Get out of bed

Step 2. Build AGI

Step 3. ?????

Step 4. Profit

Only now we have a name for deliberately skipping finding out how step 3 works.


Another useful system:

Dieting

Food in (weight goes up) -> black box human body -> energy out (weight goes down)


Make your own black box systems in the comments.


Meta: short post, 1.5 hours to write, edit and publish.  Felt it was an idea that provides useful ways to talk about things.  Needed it to explain something to someone; now all can enjoy!

My Table of contents has my other writings in it.

All suggestions and improvements welcome!

Procrastination checklist

4 Elo 03 March 2016 03:04AM

Procrastination checklist

This list is a revision of this checklist: http://lesswrong.com/lw/hgd/10step_antiprocrastination_checklist/


1. What is the task? Make sure you're going to focus on one thing at a time.  Write it down (helps some people).  (If you need - start with the big picture, one sentence of "what is this for")


Can you do it now? (If yes then do it)


2. How long will you work until you take a break?  Prepare to set a timer and commit to focusing.


Can you do it now? (If yes then do it)


3. What are the parts to this task?  Break things down until they are in *can do it now* steps. If you have a small number of steps that can now be done, stop writing more steps and start doing them.


Can you do it right now?  (If yes then do it)


4. What's an achievable goal for this sitting? Set a reasonable expectation for yourself.  (until it's done, 1000 words, complete research on X part)


Can you do it now? (If yes then do it)


5. How can you make it easier to do the task?

  • Is the environment right?  Desk clear, well lit area...

  • Do you have something to drink? Get yourself some tea, coffee, or water.

  • Are distractions closed? Shut the door, quit Tweetdeck, close the Facebook and Gmail tabs, and set skype to "Do not disturb."

  • What music will you listen to, to inspire yourself to be productive? Put on a good instrumental playlist! (video game soundtracks are good)

  • Do you have the right books open?  The right tools in reach?

  • Is your chair comfortable?

  • Can you make it harder to do the distracting or <not this> thing?

  • (step 3 is going to help to make it easier)


Can you do it now? (If yes then do it)


6. Why are you doing this task?  Trace the value back until you increase the desire to do it.


Can you do it now? (If yes then do it)


7. Will gamifying help you? What are some ways to gamify the task?  Try to have fun with it!


Can you do it now? (If yes then do it)


8. What are some rewards you can offer yourself for completing sections of the task? Smiling, throwing your arms up in the air and proclaiming victory, or M&M's all count; so does a trip to the beach, or a nice milkshake...


Can you do it now? (If yes then do it)


9. Are you sure you want to do it?  Deciding either not to do it now, or not to do it at all, is also fine.  It’s up to you to make that decision, keeping in mind what “not doing it” means in its entirety.



In first-person form:

1. What is the task? Make sure I’m going to focus on one thing at a time.  Write it down (helps some people).  (If I need - start with the big picture, one sentence of "what is this for")


Can I do it now? (If yes then do it)


2. How long will I work until I take a break?  Prepare to set a timer and commit to focusing.


Can I do it now? (If yes then do it)


3. What are the parts to this task?  I want to break things down until they are in *can do it now* steps. If I have a small number of steps that can now be done, I will stop writing more steps and start doing them.


Can I do it right now?  (If yes then do it)

 

4. What's an achievable goal for this sitting? Set a reasonable expectation for myself.  (until it's done, 1000 words, complete research on X part)


Can I do it now? (If yes then do it)


5. How can I make it easier to do the task?

  • Is the environment right?  Desk clear, well lit area...

  • Do I have something to drink? Get myself some tea, coffee, or water.

  • Are my distractions closed? Shut the door, quit Tweetdeck, close the Facebook and Gmail tabs, set skype to "Do not disturb."

  • What music will I listen to, to inspire myself to be productive? Put on a good instrumental playlist!

  • Do I have the right books open?  The right tools in reach?

  • Is my chair comfortable?

  • Can I make it harder to do the distracting or <not this> thing?

  • (step 3 is going to help to make it easier)


Can I do it now? (If yes then do it)


6. Why am I doing this task?  Trace the value and feeling back until I increase the desire to do it.


Can I do it now? (If yes then do it)


7. Will gamifying help me? What are some ways to gamify the task?


Can I do it now? (If yes then do it)


8. What are some rewards I can offer myself for completing sections of the task? Smiling, throwing my arms up in the air and proclaiming victory, or M&M's all count; so does a trip to the beach, or a nice milkshake...


Can I do it now? (If yes then do it)


9. Am I sure I want to do it?  Deciding either not to do it now, or not to do it at all, is also fine.  It’s up to me to make that decision, keeping in mind what “not doing it” means in terms of the task at hand.


Meta: This took about 2 hours to put together; between writing, rewriting, reordering, editing feedback and publishing.

I couldn't decide whether 2nd person or 1st person was better so I wrote both.  Please let me know which you prefer.

Any adjustments or suggestions are welcome.

My table of contents is where you will find the other things I have written.

Feedback on whether this works or helps is also welcome.

The ethics of eating meat

6 necate 17 February 2016 07:03PM

I grew up in a family of meat-eaters and therefore have been eating meat all my life. Until recently I had never spent much time thinking about it. I justified my behaviour by saying that animal lives do not matter, because animals are not self-conscious, and that animal pain does not matter, because they have no memory of pain and therefore, as soon as the actual pain is over, it is as if it had never happened.

In recent weeks I have spent some time properly thinking this through and forming an informed belief about whether I can justify eating meat. I would like to hear your thoughts about my thought process and results, because this is a decision that I really don’t want to get wrong.

I have identified 5 possible problems with meat consumption.

  1. Meat requires us to kill animals.
  2. Factory-farmed animals are in a considerable amount of pain for most of their lives.
  3. Meat production requires much more space than producing plants, and therefore might contribute to world hunger.
  4. Some studies claim that meat, especially if factory-farmed, is unhealthy.
  5. Meat production is bad for the environment (partly because of point 4, but also for other reasons).

I have decided to ignore problems 4 and 5 at the beginning, because admitting that they are true would impose weaker restrictions on me. If I come to the conclusion that I don’t want to eat meat for reason 1, I could no longer eat any meat, and reason 2 would forbid me to eat factory-farmed meat, which would essentially bring my meat consumption down to something close to zero.

Reasons 4 and 5 would limit my meat consumption far less, since I do lots of other things that are unhealthy (like eating candy and snacks) or harmful to the environment (like traveling by plane). While I might come to the conclusion that I want to reduce my meat consumption for reasons 4 and 5, I expect to have many situations left where eating meat gives me enough utility to still do it in spite of those reasons.

Reason 3 would also be important, but I am fairly sure that the problem mostly lies with the lack of spending power in poorer countries, and that my stopping eating meat will not lead to more food in Africa. For that reason I did not do further research on this.

So what I did was to think about problems 1 and 2 and decide to revisit 4 and 5 if I come to the conclusion that 1 and 2 still allow me to continue eating meat like I do now. 

Is it justifiable to kill animals?

It is clear to me that it is wrong to kill a human being whose brain is not significantly damaged. It is also clear that I have absolutely no problem with killing bacteria or other very simple living beings. Therefore there must exist some features, besides the mere fact of being alive, that a human has and a bacterium has not, and that divide living beings into things that I am willing to kill and things that I am not willing to kill.

The criterion that I used up to now was self-consciousness, which is very convenient because it puts the line with humans (and likely great apes as well) on one side, and basically everything I want to eat on the other side.

There are quite a few things that justify this criterion such as:

  1. From a preference-utilitarian perspective, only a self-conscious being can have preferences about the future; therefore you can only violate the preferences of a self-conscious being by killing it. This would be a knock-down argument under the premise that preference utilitarianism (and not, for example, ordinary utilitarianism) is the ethical principle to go with.
  2. Although I am no expert in this field, I believe that it is relatively easy to build a virtual being (for example in a computer game), or with a bit more effort even a robot, that behaves in the way that leads current researchers to conclude that animals have some kind of utility. I count the fact that it is easy to build such a thing as evidence that animals might function in a similar way, and I would not have a problem with “hurting” this virtual thing. Therefore, if animals work this way, I have no problem with hurting them.
  3. This explanation from Eliezer: https://m.facebook.com/yudkowsky/posts/10152588738904228, which I will come back to when I talk about pain, but which is relevant here as well. (It might to some degree be similar to my point 2.)

 

There are, however, other arguments against it.

 

  1. Some animals do things that are far more complex than reacting to pain and simple pleasures, such as forming relationships for life or mourning when a group member dies. Those things require a more developed brain and are features that most people would see as characteristic of humans. Since the fact that we kill animals but not humans must come from differences between them, the more similar the two are, the less likely it is that treating them differently is justified.
  2. From a certain utilitarian perspective (namely the one that cares about the utility of existing beings but not about non-existing beings), it would be wrong to kill animals with positive utility. And since, if animals can have utility, it would obviously be wrong to breed them and make their lives miserable so that they have negative utility, this would mean that we could not kill animals.

 

I find the arguments against killing animals to be far weaker, since I do not follow the particular form of utilitarianism that supports them, and since I cannot really explain why the features I named under 1 should forbid me to kill animals. In addition to that, I count as evidence the fact that Peter Singer, who is against all killing of animals and is arguably a pretty clever person, has found no better way to justify his statement that one should not kill animals at all than the idea that killing them will lead us to continue to objectify them and ignore their pain. Since Singer has found no better reason, and he probably spent a lot of time looking, it is likely that there is none.

Although I am fairly confident that killing animals is in line with my ethical beliefs, I still see some trouble. If I am wrong on this, it might be an incredibly harmful decision, since it will lead to the death of many animals (probably hundreds of them, if I don’t reduce my meat consumption for other reasons). Therefore I have to be incredibly confident that I have not overlooked something in order to continue to eat meat. And I have limited time and probably a strong motivation to come to the conclusion that meat eating is okay, which clouds my judgement. I feel that I need more evidence. As far as I know there are lots of meat eaters here, and some of them will have thought about this. Why are you so confident that animal lives do not matter? Is it that I have overlooked major arguments, or is self-consciousness just more of a knock-down argument than I think?

Animals and Pain

It is relatively well established that animals show reactions that one could associate with pain, and they have a nervous system that allows pain. Singer claimed this in his 1975 book Animal Liberation for mammals and birds and cited research on it, and as far as I know no one has really corrected him on that. I also found papers that claim the same for fish and lobsters, and I have not found any counterevidence. So the questions that remain are: do animals get negative utility from pain, and do they have utility functions at all?

Eliezer argues in this post (https://m.facebook.com/yudkowsky/posts/10152588738904228) that they don’t have utility. I can understand his model, but I could also imagine that an animal mind works in other ways. I am no expert in evolutionary biology, but as far as I know, the mainstream opinion among scientists right now is that animals feel pain.

There is, for example, the Cambridge Declaration on Consciousness (http://fcmconference.org/img/CambridgeDeclarationOnConsciousness.pdf). It might have a different understanding of the word consciousness compared to the one which I think is most popular in the LessWrong community (consciousness as being aware of one’s own existence), but it clearly states that animals have affective states and therefore utility. If animals can suffer pain, then factory farming is incredibly wrong. I would therefore have to be very certain (surely above 99% confidence) that they don’t, or I cannot justify eating factory-farmed meat. The question is: how can I be so sure if a significant number of experts are of a different opinion? Does anyone have any actual research on the topic that explains the reasons why animals do not have utility in more detail than Eliezer did? Basically I would need something that not only explains why this is a plausible hypothesis, but that also explains why animals could not possibly have evolved in a way that makes them feel pain. So basically, why a pig that feels pain makes no sense from an evolutionary perspective.

If my current beliefs don’t shift any more, I will stop eating factory-farmed meat, but not stop eating meat altogether. I would be happy about any additional evidence, or about opinions on the conclusions I draw from my evidence.

Should we admit it when a person/group is "better" than another person/group?

0 adamzerner 16 February 2016 09:43AM

This sort of thinking seems bad:

me.INTRINSIC_WORTH = 99999999; // No matter what I do, this fixed property will remain constant.

This sort of thinking seems socially frowned upon, but accurate:

a.impactOnSociety(time) > b.impactOnSociety(time)

a.qualityOfCharacter > b.qualityOfCharacter // determined by things like altruism, grit, courage, self awareness...

Similar points could be made by replacing a/b with [group of people]. I think it's terrible to say something like:

This race is inherently better than that race. I refuse to change my mind, regardless of the evidence brought before me.

But to me, it doesn't seem wrong to say something like:

Based on what I've seen, I think that the median member of Group A has a higher qualityOfCharacter than the median member of Group B. I don't think there's anything inherently better about Group A. It's just based on what I've observed. If presented with enough evidence, I will change my mind.

Credit and accountability seem like good things to me, and so I want to live in a world where people/groups receive credit for good qualities, and are held accountable for bad qualities.

I'm not sure though. I could see that there are unintended consequences of such a world. For example, such "score keeping" could lead to contentiousness. And perhaps it's just something that we as a society (to generalize) can't handle, and thus shouldn't keep score.
