Proposal: consolidate meetup announcements before promotion

11 CarlShulman 03 May 2011 01:34AM

The Less Wrong feed is getting crowded with meetups rather than substantive posts. This should be fixed in the redesign, but one way to work around it in the meantime would be to make top-level posts announcing several meetups at once.

Folks would post meetups under the 'NEW' category, and each week (or even every few days) one of the meetup organizers could edit her post to announce all the meetups since the last consolidated post. This would greatly reduce the clutter while still getting meetups into the main feed. On the other hand, it would reduce the average warning time before meetups, and the additional activation energy might deter some meetups.

If you have thoughts on the workability of this scheme, or an adjustment to make it workable, please comment below.

[HT: Anna Salamon]

Future of Humanity Institute hiring postdocs from philosophy, math, CS

4 CarlShulman 02 February 2011 12:39AM

The application deadline for the two Research Fellowships with the new Programme on the Impacts of Future Technology has been extended to 21 February.

The posts are:

1. Postdoctoral Research Fellowship, with emphasis on philosophy
2. Postdoctoral Research Fellowship, with emphasis on computer science, cognitive science, or mathematics

Further details at: www.fhi.ox.ac.uk/get_involved/future_tech_vacancies/futuretech.


Future of Humanity Institute at Oxford hiring postdocs

6 CarlShulman 24 November 2010 09:40PM

Probability and Politics

17 CarlShulman 24 November 2010 05:02PM

Follow-up to: Politics as Charity

Can we think well about courses of action with low probabilities of high payoffs?  

Giving What We Can (GWWC), whose members pledge to donate a portion of their income to most efficiently help the global poor, says that evaluating spending on political advocacy is very hard:

Such changes could have enormous effects, but the cost-effectiveness of supporting them is very difficult to quantify as one needs to determine both the value of the effects and the degree to which your donation increases the probability of the change occurring. Each of these is very difficult to estimate and since the first is potentially very large and the second very small [1], it is very challenging to work out which scale will dominate.

This sequence attempts to actually work out a first approximation of an answer to this question, piece by piece. Last time, I discussed the evidence, especially from randomized experiments, that money spent on campaigning can elicit marginal votes quite cheaply. Today, I'll present the state-of-the-art in estimating the chance that those votes will directly swing an election outcome.

Disclaimer

Politics is a mind-killer: tribal feelings readily degrade the analytical skill and impartiality of otherwise very sophisticated thinkers, and so discussion of politics (even in a descriptive empirical way, or in meta-level fashion) signals an increased probability of poor analysis. I am not a political partisan and am raising the subject primarily for its illustrative value in thinking about small probabilities of large payoffs.

continue reading »

Nils Nilsson's AI History: The Quest for Artificial Intelligence

13 CarlShulman 31 October 2010 07:33PM

I just noticed that AI pioneer and former Association for the Advancement of Artificial Intelligence (AAAI) head Nils Nilsson has published his history of AI, The Quest for Artificial Intelligence: A History of Ideas and Achievements. The book is available as a free PDF from his website, with the paid version (and reviews) on Amazon.

Politics as Charity

29 CarlShulman 23 September 2010 05:33AM

Related to: Shut up and multiply, Politics is the mind-killer, Pascal's Mugging, The two party swindle, The American system and misleading labels, Policy Tug-of-War

Jane is a connoisseur of imported cheeses and Homo Economicus in good standing, using a causal decision theory that two-boxes on Newcomb's problem. Unfortunately for her, the politically well-organized dairy farmers in her country have managed to get an initiative for increased dairy tariffs on the ballot, which will cost her $20,000. Should she take an hour to vote against the initiative on election day? 

She estimates that she has a 1 in 1,000,000 chance of casting the deciding vote, for an expected value of $0.02 from improved policy. However, while Jane may be willing to give her two cents on the subject, the opportunity cost of her time far exceeds the policy benefit, and so it seems she has no reason to vote.
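Jane's arithmetic is the standard "shut up and multiply" expected-value calculation. A minimal sketch, using only the figures given in the text (the $20,000 tariff cost and her 1-in-1,000,000 estimate of being decisive):

```python
# Jane's expected-value arithmetic, using the figures from the text.
tariff_cost = 20_000.0    # dollars the dairy tariff would cost Jane
p_decisive = 1 / 1_000_000  # her estimated chance of casting the deciding vote

# Expected selfish benefit of voting against the initiative:
expected_policy_value = p_decisive * tariff_cost
print(expected_policy_value)  # roughly 0.02 dollars, i.e. two cents
```

Any plausible value for an hour of Jane's time dwarfs this two-cent expected benefit, which is why the selfish case for voting collapses even when the stakes for her are large.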

Jane's dilemma is just the standard Paradox of Voting in political science and public choice theory. Voters may still engage in expressive voting to affiliate with certain groups or to signal traits insofar as politics is not about policy, but the instrumental rationality of voting to bring about selfishly preferred policy outcomes starts to look dubious. Thus many of those who say that we rationally ought to vote in hopes of affecting policy focus on altruistic preferences: faced with a tiny probability of casting a decisive vote, but large impacts on enormous numbers of people in the event that we are decisive, we should shut up and multiply, voting if the expected value of benefit to others sufficiently exceeds the cost to ourselves.

Meanwhile, at the Experimental Philosophy blog, Eric Schwitzgebel reports that philosophers overwhelmingly rate voting as very morally good (on a scale of 1 to 9), with voting placing right around donating 10% of one's income to charity. He offers the following explanation:

continue reading »

Singularity Call For Papers

7 CarlShulman 10 April 2010 04:08PM

Amnon Eden has sent out this call for papers on technological singularity, which many Less Wrongers may be interested in. I presented at last year's conference, which was a good experience with many interesting people. Submitting good papers can help to legitimate and cultivate the field and thus reduce existential risk (although of course poor work could have the reverse effect). If you have an idea or a draft that you're not sure about, and would like to discuss it before submitting, I'd be happy to help if you contact me (carl DOT shulman AT gmail).

I am also told that the Singularity Institute may be able to provide travel funding for selected papers. Email annasalamon@intelligence.org for more information. 

continue reading »

December 2009 Meta Thread

6 CarlShulman 17 December 2009 03:41AM

This post is a place to discuss meta-level issues regarding Less Wrong. It may or may not be the unique venue for such discussion in the future.

Boston Area Less Wrong Meetup: 2 pm Sunday October 11th

4 CarlShulman 07 October 2009 09:15PM

There will be a Less Wrong meet-up this Sunday, October 11th, 2 pm, in Cambridge at the Central Square Starbucks Coffee at 655 Massachusetts Avenue (time and place are flexible if anyone has a conflict); please comment if you'd like to attend, or if you have any questions or ideas. Some confirmed attendees include SIAI folk and Less Wrongers Anna Salamon, Steve Rayhawk, Carl Shulman, and Roko Mijc. Also keep your eyes peeled for a probable appearance of expert reductionist Gary Drescher, and a rumored Scott Aaronson sighting.

Feel free to contact me at my first name DOT my last name AT post.harvard.edu or 646-525-5383.
Thanks, and see everyone there!

New Haven/Yale Less Wrong Meetup: 5 pm, Monday October 12

3 CarlShulman 07 October 2009 08:35PM

Posted on behalf of Thomas McCabe:

I (Thomas McCabe, a Yale math student) will be hosting a Less Wrong meetup in New Haven, Connecticut, on the Yale University campus. The meetup will take place at 5 PM on Monday, October 12th, at the Yorkside Pizza & Restaurant at 288 York St. (time and place are flexible if anyone has a conflict); please comment if you'd like to attend, or if you have any questions or ideas. The location can be found on Google Maps at this link.

Some confirmed attendees include SIAI folk and Less Wrongers Anna Salamon, Steve Rayhawk, Carl Shulman, and Roko Mijc.

Feel free to contact me at thomas.mccabe@yale.edu, or at 518-248-5525.
Thanks, and see everyone there!
