
The Talos Principle

2 kranalee 21 February 2016 07:37PM

Dear members of Less Wrong, this is my very first contribution to your community, and I hope you might help me out of my confusion.

 

A few months ago, I played for the first time a video game created by Croteam called 'The Talos Principle'.

 

At the time, I was astonished by all the philosophical questions the game was raising. It has changed the way I see the world, and the way I see myself.

 

I wanted to share my thoughts with you on the subject of 'What does being a human mean?'

 

 

 

First, I'd like to introduce you to this principle.

 

In Greek mythology, Talos was a giant automaton made of bronze which protected Europa in Crete from pirates and invaders.

 

He was known to be a gift given to Europa by Zeus himself.

 

He was so strong that he could crush a man's skull using only one hand, and so tall that he could circle the island's shores three times daily.

 

He was able to talk, think, and act as he wished (except that he had to obey Europa's will).

 

Even though his body was not organic, a liquid metal flowed through his veins and behaved like blood.

 

 

 

And here is where the principle begins: what is the fundamental difference between Talos and us humans?

 

Considering that, like us, he is able to think for himself, move by his own will, and communicate as everybody does, is he really different from us? Doesn't sharing our culture, history, and language make him human as well?

 

I'm pretty sure your first thought might be: 'No way! We are part of a biological species. We have nothing in common with a synthetic being.'

 

But does our body really define us as human beings?

 

From a strictly biological point of view, Darwin would say yes, of course, and we couldn't argue with that.

 

But take a human being, for instance Plato, cut his leg off, and replace it with a synthetic prosthesis.

 

Would this person still be Plato?

 

It appears that the answer is yes, according to all the people who have suffered accidents that cost them a part of their body.

 

They were still the same. Of course they suffered phantom pains and other psychological damage, but in the end they remained the same as before.

 

Let's get back to our example. Now imagine that this synthetic-leg-equipped Plato has an accident that makes him lose his right arm. Full of empathy, you agree to give him a prosthetic one.

 

Now, would this person still be Plato?

 

Again, the answer is yes. Such accidents never leave a man without some kind of trauma, but he is still able to think and act like a normal human. Thus we assume that he is still one of us, and that he is still himself.

 

So, how many times must we repeat the process before we touch something that cannot be exchanged for anything synthetic without destroying Plato's humanity (and sanity)?

 

The answer appears to be the brain.

 

Removing the brain is the same as erasing the person. We can live with an artificial heart, lungs, stomach, etc., but we cannot live without our natural brain.

 

 

 

The brain is one of the biggest unknowns in the human body. Doctors claim that we understand less than half of how the brain works, which only mystifies it further.

 

But we can still reduce the brain to its physical material: an estimated 15-33 billion neurons, each connected by synapses to several thousand others, communicating by means of long protoplasmic fibers called axons, which carry trains of signal pulses called action potentials to specific recipient cells in distant parts of the brain or body.

 

Indeed, even if we do not know for sure how every cell interacts with the others, we know that everything is bound by chemistry. Every transfer of information can be reduced to a chemical reaction, something physical.

 

Every thought of our being starts and ends with a chemical reaction. And we know how to replace one chemical reaction with another; we know how to simulate a potential transfer, and thus we are today able to simulate a very simple brain on a computer.

 

(You may want to check the Blue Brain Project, which illustrates everything I'm writing. That simulation does not consist simply of an artificial neural network but involves a biologically realistic model of neurons.)
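As a toy illustration of the claim that neural signalling can be simulated, here is a minimal sketch of a leaky integrate-and-fire neuron in Python. This is far simpler than the Blue Brain Project's biologically realistic models, and all parameter values are illustrative, not physiological:

```python
# Leaky integrate-and-fire neuron: the membrane voltage integrates an
# input current, leaks back toward rest, and emits a spike at threshold.
def simulate_lif(current=1.5, steps=1000, dt=0.1,
                 v_rest=0.0, v_thresh=1.0, tau=10.0):
    v = v_rest
    spike_times = []
    for step in range(steps):
        # dv/dt = (-(v - v_rest) + current) / tau, Euler-integrated
        v += dt * (-(v - v_rest) + current) / tau
        if v >= v_thresh:              # "action potential" fires
            spike_times.append(step * dt)
            v = v_rest                 # reset after the spike
    return spike_times

if __name__ == "__main__":
    times = simulate_lif()
    print(f"{len(times)} spikes; first at t={times[0]:.1f}")
```

The point is not realism but reducibility: the "thought" here is nothing more than arithmetic on a physical quantity, which is exactly the premise of whole-brain simulation.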

 

 

So if in the near future we are able to correctly simulate a human brain, and therefore a whole human body as well, can we consider it a human being?

 

Being aware of the material reality of the brain might make you think twice about yourself and your species in general.

 

How would you describe a human being now? Would you describe Talos as a human being as well? Or just call it a being, refusing it the title of 'human' because of the biological difference between you and it? And can a man entirely simulated in a computer still be called human?

 

Also, do not forget how the body influences the brain. Just look back on what happened to you during puberty, when sexual desire overwhelmed you, making it impossible to remain calm. This happened because of chemicals, and it is striking how a single chemical can have such a huge influence on your consciousness.

 

For now I'm in a haze, so instead of lying on my bed thinking, I'd rather ask for your point of view. I'm very curious; would you kindly give it to me?

 

Thanks for reading it all, I'll see your reactions in the comment section below.

 

[By the way, I'm a 19-year-old French engineering student; please pardon my English.]

New positions and recent hires at the Centre for the Study of Existential Risk (Cambridge, UK)

9 Sean_o_h 13 October 2015 11:11AM

[Cross-posted from EA Forum. Summary: Four new postdoc positions at the Centre for the Study of Existential Risk: Evaluation of extreme technological risk (philosophy, economics); Extreme risk and the culture of science (philosophy of science); Responsible innovation and extreme technological risk (science & technology studies, sociology, policy, governance); and an academic project manager (cutting across the Centre’s research projects, and playing a central role in Centre development). Please help us to spread the word far and wide in the academic community!]

 

An inspiring first recruitment round

The Centre for the Study of Existential Risk (Cambridge, UK) has been making excellent progress in building up our research team. Our previous recruitment round was a great success, and we made three exceptional hires. Dr Shahar Avin joined us in September from Google, with a background in the philosophy of science (Cambridge, UK). He is currently fleshing out several potential research projects, which will be refined and finalised following a research visit to FHI later this month. Dr Yang Liu joined us this month from Columbia University, with a background in mathematical logic and philosophical decision theory. Yang will work on problems in decision theory that relate to long-term AI, and will help us to link the excellent work being done at MIRI with relevant expertise and talent within academia. In February 2016, we will be joined by Dr Bonnie Wintle from the Centre of Excellence for Biosecurity Risk Analysis (CEBRA), who will lead our horizon-scanning work in collaboration with Professor Bill Sutherland’s group at Cambridge; among other things, she has worked on IARPA-funded development of automated horizon-scanning tools, and has been involved in the Good Judgement Project.

We are very grateful for the help of the existential risk and EA communities in spreading the word about these positions, and helping us to secure an exceptionally strong field. Additionally, I have now moved on from FHI to be CSER’s full-time Executive Director, and Huw Price is now 50% funded as CSER’s Academic Director (we share him with Cambridge’s Philosophy Faculty, where he remains Bertrand Russell Chair of Philosophy).

Four new positions:

We’re delighted to announce four new positions at the Centre for the Study of Existential Risk; details below. Unlike the previous round, where we invited project proposals from across our areas of interest, in this case we have several specific positions that we need to fill for our three-year Managing Extreme Technological Risk project, funded by the Templeton World Charity Foundation; details are provided below. As we are building up our academic brand within a traditional university, we expect to predominantly hire from academia, i.e. academic researchers with (or near to the completion of) PhDs. However, we are open to hiring excellent candidates without PhDs but with an equivalent and relevant level of expertise, for example in think tanks, policy settings or industry.

Three of these positions are in the standard academic postdoc mould, working on specific research projects. I’d like to draw attention to the fourth, the academic project manager. For this position, we are looking for someone with the intellectual versatility to engage across our research strands – someone who can coordinate these projects, synthesise and present our research to a range of audiences including funders, collaborators, policymakers and industry contacts. Additionally, this person will play a key role in developing the centre over the next two years, working with our postdocs and professorial advisors to secure funding, and contributing to our research, media, and policy strategy among other things. I’ve been interviewed in the past (https://80000hours.org/2013/02/bringing-it-all-together-high-impact-research-management/) about the importance of roles of this nature; right now I see it as our biggest bottleneck, and a position in which an ambitious person could make a huge difference.

We need your help – again!

In some ways, CSER has been the quietest of the existential risk organisations of late – we’ve mainly been establishing research connections, running lectures and seminars, writing research grants and building relations with policymakers (plus some behind-the scenes involvement with various projects). But we’ve been quite successful in these things, and now face an exciting but daunting level of growth: by next year we aim to have a team of 9-10 postdoctoral researchers here at Cambridge, plus senior professors and other staff. It’s very important we continue our momentum by getting world-class researchers motivated to do work of the highest impact. Reaching out and finding these people is quite a challenge, especially given our still-small team. So the help of the existential risk and EA communities in spreading the word – on your facebook feeds, on relevant mailing lists in your universities, passing them on to talented people you know – will make a huge difference to us.

Thank you so much!

Seán Ó hÉigeartaigh (Executive Director, CSER)

 

“The Centre for the Study of Existential Risk is delighted to announce four new postdoctoral positions for the subprojects below, to begin in January 2016 or as soon as possible afterwards. The research associates will join a growing team of researchers developing a general methodology for the management of extreme technological risk.

Evaluation of extreme technological risk will examine issues such as:

The use and limitations of approaches such as cost-benefit analysis when evaluating extreme technological risk; the importance of mitigating extreme technological risk compared to other global priorities; issues in population ethics as they relate to future generations; challenges associated with evaluating small probabilities of large payoffs; challenges associated with moral and evaluative uncertainty as they relate to the long-term future of humanity. Relevant disciplines include philosophy and economics, although suitable candidates outside these fields are welcomed. More: Evaluation of extreme technological risk

Extreme risk and the culture of science will explore the hypothesis that the culture of science is in some ways ill-adapted to successful long-term management of extreme technological risk, and investigate the option of ‘tweaking’ scientific practice, so as to improve its suitability for this special task. It will examine topics including inductive risk, use and limitations of the precautionary principle, and the case for scientific pluralism and ‘breakout thinking’ where extreme technological risk is concerned. Relevant disciplines include philosophy of science and science and technology studies, although suitable candidates outside these fields are welcomed. More: Extreme risk and the culture of science;

Responsible innovation and extreme technological risk asks what can be done to encourage risk-awareness and societal responsibility, without discouraging innovation, within the communities developing future technologies with transformative potential. What can be learned from historical examples of technology governance and culture-development? What are the roles of different forms of regulation in the development of transformative technologies with risk potential? Relevant disciplines include science and technology studies, geography, sociology, governance, philosophy of science, plus relevant technological fields (e.g., AI, biotechnology, geoengineering), although suitable candidates outside these fields are welcomed. More: Responsible innovation and extreme technological risk

We are also seeking to appoint an academic project manager, who will play a central role in developing CSER into a world-class research centre. We seek an ambitious candidate with initiative and a broad intellectual range for a postdoctoral role combining academic and administrative responsibilities. The Academic Project Manager will co-ordinate and develop CSER’s projects and the Centre’s overall profile, and build and maintain collaborations with academic centres, industry leaders and policy makers in the UK and worldwide. This is a unique opportunity to play a formative research development role in the establishment of a world-class centre. More: CSER Academic Project Manager

Candidates will normally have a PhD in a relevant field or an equivalent level of experience and accomplishment (for example, in a policy, industry, or think tank setting). Application Deadline: Midday (12:00) on November 12th 2015.”

Regular Lesswrongers' interviews

3 Elo 13 June 2015 09:12AM

As was commented by Clarity:

Perhaps someone can do an interview to differentiate his mind and rationality from others. For instance, his motivation for different posts or blog posts, or to what extent he consciously optimises his optimisation processes. Gwern, if you feel this is inappropriate and would rather not be put in the spotlight, make your feelings known and I won't hassle you further.

Otherwise, for the sake of developing my rationalist skill set and knowledge, I'd like to know more. Elo, would you be so kind as to consider conducting an in-depth case-study-type interview?

(Post: http://lesswrong.com/lw/m9e/a_survey_of_the_top_posters_on_lesswrong/; comment: http://lesswrong.com/lw/m9e/a_survey_of_the_top_posters_on_lesswrong/cgyo)

I am more than happy to start a small project interviewing active LW participants.


With that in mind:

  1. Who would you like to have interviewed? (can be multiples)
  2. What would you ask them? (can be multiples)
    2a. what would generally be good questions for interviews of lesswrongers?
  3. What format would you prefer (will survey in the comments)?

OR:

Would you like to be interviewed?
Do you have a project that you would like to be interviewed about?

Disclosure: I am personally concerned with raising the profile, status and existence of in-person meetups, so I would probably include a few questions about:

  • Where are you from?
  • Which meetups have you visited?
  • What was the most valuable thing you found in a meetup?
Other questions would probably be rot13-encoded (http://www.rot13.com):
  • Jung qb lbh qb jvgu lbhe gvzr (gvzr hfr)?
  • Jung qb lbh qb sbe n yvivat (va pnfr gurl ner qvssrerag)?
  • Jung ner lbh jbexvat ba?
  • Jung qb lbh qb sbe sha?
  • Jung zrffntr jbhyq lbh funer jvgu YJ?
  • Jung qb lbh frr bs gur jbeyq va 10 lrnef?
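(For readers who want to decode questions like the ones above, Python's standard library handles rot13 directly; a minimal sketch:)

```python
import codecs

# rot13 shifts each letter 13 places, so decoding and encoding are
# the same operation; non-letters pass through unchanged.
encoded = "Jung qb lbh qb jvgu lbhe gvzr (gvzr hfr)?"
print(codecs.decode(encoded, "rot13"))
# → What do you do with your time (time use)?
```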

 

I am keen to try to set up one interview every week or two. (I'm willing to put in 2 hours of effort each week towards contacting people, planning questions, talking to them, interviewing on record, tidying up the audio, then publishing the interview. Est. 15-45 min per interview.)

Moloch: optimisation, "and" vs "or", information, and sacrificial ems

20 Stuart_Armstrong 06 August 2014 03:57PM

Go read Yvain/Scott's Meditations On Moloch. It's one of the most beautiful, disturbing, poetic looks at the future that I've ever seen.

Go read it.

Don't worry, I can wait. I'm only a piece of text, my patience is infinite.

De-dum, de-dum.

You sure you've read it?

Ok, I believe you...

Really.

I hope you wouldn't deceive an innocent and trusting blog post? You wouldn't be enough of a monster to abuse the trust of a being as defenceless as a constant string of ASCII symbols?

Of course not. So you'd have read that post before proceeding to the next paragraph, wouldn't you? Of course you would.

 

Academic Moloch

Ok, now to the point. The "Moloch" idea is very interesting, and, at the FHI, we may try to do some research in this area (naming it something more respectable/boring, of course, something like "how to avoid stable value-losing civilization attractors").

The project hasn't started yet, but a few caveats to the Moloch idea have already occurred to me. First of all, it's not obligatory for an optimisation process to trample everything we value into the mud. This is likely to happen with an AI's motivations, but it isn't inevitable for every optimisation process.

One way of seeing this is the difference between "or" and "and". Take the democratic election optimisation process. It's clear, as Scott argues, that this optimises badly in some ways: it encourages appearance over substance, some types of corruption, etc. But it also optimises along some positive axes, with some clear, relatively stable differences between the parties which reflect some voters' preferences, and punishment for particularly inept behaviour from leaders (I might argue that the main benefit of democracy is not the final vote between the available options, but the filtering out of many pernicious options because they'd never be politically viable). The question is whether these two strands of optimisation can be traded off against each other, or whether a minimum of each is required. So can we make a campaign that is purely appearance-based, without any substantive position ("or": a maximum on one axis is enough), or do you need a minimum of substance and a minimum of appearance to buy off different constituencies ("and": you need some achievements on all axes)? And no, I'm not interested in discussing current political examples.

Another example Scott gave was of the capitalist optimisation process, and how it in theory matches customers' and producers' interests, but could go very wrong:

Suppose the coffee plantations discover a toxic pesticide that will increase their yield but make their customers sick. But their customers don't know about the pesticide, and the government hasn't caught up to regulating it yet. Now there's a tiny uncoupling between "selling to [customers]" and "satisfying [customers'] values", and so of course [customers'] values get thrown under the bus.

This effect can be combated to some extent with extra information. If the customers (or journalists, bloggers, etc...) know about this, then the coffee plantations will suffer. "Our food is harming us!" isn't exactly a hard story to publicise. This certainly doesn't work in every case, but increased information is something that technological progress would bring, and this needs to be considered when asking whether optimisation processes will inevitably tend to a bad equilibrium as technology improves. An accurate theory of nutrition, for instance, would have great positive impact if its recommendations could be measured.

Finally, Zack Davis's poem about the em stripped of (almost all) humanity got me thinking. The end result of that process is tragic for two reasons: first, the em retains enough humanity to have curiosity, only to get killed for this. And secondly, that em once was human. If the em was entirely stripped of human desires, the situation would be less tragic. And if the em was further constructed in a process that didn't destroy any humans, this would be even more desirable. Ultimately, if the economy could be powered by entities developed non-destructively from humans, and which were clearly not conscious or suffering themselves, this would be no different than powering the economy with the non-conscious machines we use today. This might happen if certain pieces of a human-em could be extracted, copied and networked into an effective, non-conscious entity. In that scenario, humans and human-ems could be the capital owners, and the non-conscious modified ems could be the workers. The connection of this with the Moloch argument is that it shows that certain nightmare scenarios could in some circumstances be adjusted to much better outcomes, with a small amount of coordination.

 

The point of the post

The reason I posted this is to get people's suggestions about ideas relevant to a "Moloch" research project, and what they thought of the ideas I'd had so far.

Publication: the "anti-science" trope is culturally polarizing and makes people distrust scientists

13 ancientcampus 07 February 2014 05:09PM

Paper by the Cultural Cognition Project: The culturally polarizing effect of the "anti-science trope" on vaccine risk perceptions

This is a great paper (indeed, I think many at LW would find the whole site enjoyable). I'll try to summarize it here.

Background: The pro/anti vaccine debate has been hot recently. Many pro-vaccine people often say, "The science is strong, the benefits are obvious, the risks are negligible; if you're anti-vaccine then you're anti-science".

Methods: They showed experimental subjects an article basically saying the above.

Results: When reading such an article, a large number of people did not trust vaccines more, but rather, trusted the American Academy of Pediatrics less.

 

My thoughts: I will strive to avoid labeling anybody as being "anti-science" or "simply or willfully ignorant of current research", etc., even when speaking of hypothetical 3rd parties on my facebook wall. This holds for evolution, global warming, vaccines, etc.

///

Also included in the article: references to other research that shows that evolution and global warming debates have already polarized people into distrusting scientists, and evidence that people are not yet polarized over the vaccine issue.

If you intend to read the article yourself: I found it difficult to understand how the authors divided participants into the four quadrants (α, β, etc.). I will quote my friend, who explained it for me:

I was helped by following the link to where they first introduce that model.

The people in the top left (α) worry about risks to public safety, such as global warming. The people in the bottom right (δ) worry about socially deviant behaviors, such as could be caused by the legalization of marijuana.

People in the top right (β) worry about both public safety risks and deviant behaviors, and people in the bottom left (γ) don't really worry about either.

[Link] Immortality Project

-4 [deleted] 20 March 2013 08:18AM

An interesting article on the Immortality Project at UC Riverside. This is the website.

This seems like something for LWers to look into - they're offering grants and essay prizes.

The Fiction Genome Project

12 [deleted] 29 June 2012 11:19AM

The Music Genome Project is what powers Pandora. According to Wikipedia:

 

The Music Genome Project was first conceived by Will Glaser and Tim Westergren in late 1999. In January 2000, they joined forces with Jon Kraft to found Pandora Media to bring their idea to market.[1] The Music Genome Project was an effort to "capture the essence of music at the fundamental level" using almost 400 attributes to describe songs and a complex mathematical algorithm to organize them. Under the direction of Nolan Gasser, the musical structure and implementation of the Music Genome Project, made up of 5 Genomes (Pop/Rock, Hip-Hop/Electronica, Jazz, World Music, and Classical), was advanced and codified.

 

A given song is represented by a vector (a list of attributes) containing approximately 400 "genes" (analogous to trait-determining genes for organisms in the field of genetics). Each gene corresponds to a characteristic of the music, for example, gender of lead vocalist, level of distortion on the electric guitar, type of background vocals, etc. Rock and pop songs have 150 genes, rap songs have 350, and jazz songs have approximately 400. Other genres of music, such as world and classical music, have 300–500 genes. The system depends on a sufficient number of genes to render useful results. Each gene is assigned a number between 1 and 5, in half-integer increments.[2]

 

Given the vector of one or more songs, a list of other similar songs is constructed using a distance function. Each song is analyzed by a musician in a process that takes 20 to 30 minutes per song.[3] Ten percent of songs are analyzed by more than one technician to ensure conformity with the in-house standards and statistical reliability. The technology is currently used by Pandora to play music for Internet users based on their preferences. Because of licensing restrictions, Pandora is available only to users whose location is reported to be in the USA by Pandora's geolocation software.[4]
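The distance-function step described above can be sketched in a few lines. This is a hypothetical illustration of the idea, not Pandora's actual algorithm: songs become vectors of gene values (1 to 5, per the description above), and similarity is plain Euclidean distance between vectors.

```python
import math

# Hypothetical gene vectors with made-up values; real songs
# have 150-500 genes, each scored 1-5 in half-integer steps.
songs = {
    "song_a": [3.0, 4.5, 2.0, 1.5],
    "song_b": [3.0, 4.0, 2.5, 1.5],
    "song_c": [1.0, 1.5, 5.0, 4.5],
}

def distance(u, v):
    """Euclidean distance between two gene vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def most_similar(name, catalog):
    """Rank all other songs by gene-vector distance to `name`."""
    seed = catalog[name]
    return sorted((distance(seed, vec), other)
                  for other, vec in catalog.items() if other != name)

print(most_similar("song_a", songs)[0][1])  # → song_b
```

Applying the same scheme to fiction, as proposed below, would mostly be a matter of choosing the "genes" (antagonist introduction point, narration style, and so on) and having human annotators score each work.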

 

 

Eminent lesswronger, strategist, and blogger Sebastian Marshall wonders:

 

Personally, I was thinking of doing a sort of “DNA analysis” of successful writing. Have you heard of the Music Genome Project? It powers Pandora.com.

 

So I was thinking, you could probably do something like that for writing, and then try to craft a written work with elements known to appeal to people. For instance, if you wished to write a best selling detective novel, you might do an analysis of when the antagonist(s) appear in the plot for the first time. You might find that 15% of bestsellers open with the primary antagonist committing their crime, 10% have the antagonist mixed in quickly into the plot, and 75% keep the primary antagonist a vague and shadowy figure until shortly before the climax.

 

I don’t know if the pattern fits that – I don’t read many detective novels – but it would be a bit of a surprise if it did. You might think, well, hey, I better either introduce the antagonist right away having them commit their crime, or keep him shadowy for a while.

 

 

Or, to use an easier example – perhaps you could wholesale adopt the use of engineering checklists into your chosen discipline? It seems to me like lots of fields don’t use checklists that could benefit tremendously from them. I run this through my mind again and again – what kind of checklist could be built here? I first came across the concept of checklists being adopted in surgery from engineering, and then having surgical accidents and mistakes go way down.

 

Some people at TV Tropes came across that article and thought that their wiki's database might be a good starting point for making this project a reality. I came here looking for the savvy, intelligence, and technical expertise in all things AI and NIT that I've come to expect of this site's user base, hoping that some of you might be interested in having a look at the discussion and, perhaps, would feel like joining in, or at least sharing some good advice.

Thank you. (Also, should I make this post "Discussion" or "Top Level"?)

"Ask for help on your project" open thread

9 Emile 06 February 2012 09:59PM

Quite a few of us are working on interesting projects; many of those are solo, but some could maybe use some help. So here's the place to ask!

Topic Search Poll Results and Short Reports

6 Nic_Smith 09 August 2011 06:28AM

At the end of June, I asked Less Wrong to vote for "What topic[s] would be best for an investigation and brief post?" in order to direct a search for topics to examine here. My thanks to everyone that participated (especially since the comments hint that the poll format was not well-liked). The most-wanted topics follow, and the complete list can be found on Google Docs -- maps and graphs related to the poll are also available on All Our Ideas. A score for a topic in the results below is an "estimated [percent] chance that it will win against a randomly chosen idea."

  1. Systems theory -- 71.6
  2. Leadership -- 70.7
  3. Linguistics (general) -- 70.7
  4. Finance -- 67.0
  5. Bayesian approach to business -- 60.7
  6. Lisp (Programming language) -- 59.7
  7. Anthropology (general) -- 59.4
  8. Sociology (general) -- 59.2
  9. Political Science (general) -- 58.5
  10. Historiography (the methods of history) -- 58.3
  11. Logistics -- 56.8
  12. Sociology of Political Organizations -- 56.0
  13. Military Theory -- 52.1
  14. Diplomacy -- 51.1
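The scores above can be approximated from raw pairwise votes. A minimal sketch follows; note that All Our Ideas fits a more sophisticated statistical model, while this simply uses each topic's raw win rate as a stand-in for its "chance of beating a randomly chosen idea":

```python
from collections import defaultdict

# Hypothetical pairwise vote records: (winner, loser) for each
# "which topic would be better?" comparison a voter answered.
votes = [
    ("Systems theory", "Finance"),
    ("Systems theory", "Leadership"),
    ("Leadership", "Finance"),
    ("Finance", "Diplomacy"),
]

def win_rates(pairs):
    """Percent of its comparisons each topic won."""
    wins, seen = defaultdict(int), defaultdict(int)
    for winner, loser in pairs:
        wins[winner] += 1
        seen[winner] += 1
        seen[loser] += 1
    return {topic: 100 * wins[topic] / seen[topic] for topic in seen}

scores = win_rates(votes)
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```

With enough voters, ranking by this estimate produces an ordering like the list above.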

Systems theory, in first place, is a topic that I found while rummaging through online sources, including Wikipedia, for items to add to the poll; it's described there as the "study of systems in general, with the goal of elucidating principles that can be applied to all types of systems in all fields of research. [....] In this context the word systems is used to refer specifically to self-regulating systems, i.e. that are self-correcting through feedback." Leadership seems to fall into both the social and "being effective" categories of interest, but has only lightly been touched on in previous discussion here despite a lot of ink spilled on the topic elsewhere -- the top Google results for "leadership" on this site are currently Calcsam's post on community roles and a book review for the Arbinger Institute's Leadership and Self Deception. "To Lead, You Must Stand Up" also comes to mind.

How to Use It

The spreadsheet includes columns for "Currently Investigated By" and "Writeup URLs" -- feel free to add your name or writeup links. If you already know a thing or two about one of the above topics, share your knowledge in a comment below or in a discussion post as appropriate, similar to the earlier "What can you teach us?" If you want to survey what currently exists on a topic, grab a few books, investigate, and then let us know what you found. When a related post instead of just a comment is appropriate, I recommend the tag "topic_search". As mentioned previously, even investigations that conclude in a comment that a topic isn't useful for LW are still themselves useful for the search.

Please vote -- What topic would be best for an investigation and brief post?

4 Nic_Smith 30 June 2011 04:50AM

Followup to: Systematic Search for Useful Ideas

I've set up a pairwise poll for this question and additional suggestions are welcome. My original proposal was to examine topics that haven't already been covered here, but instead of that, I'd like to ask people to consider the existing level of discussion on a topic in evaluating what would be "best."

ETA: There are currently over 500 pairs. You don't have to go through all of them -- answer as many or as few as you like.

[prize] new contest for Spaced Repetition literature review ($365+)

15 jsalvatier 18 June 2011 06:31PM

Update: the prize is now finished!

The previous contest was poorly formatted for eliciting the most useful reviews of the spaced repetition literature so I've created a new slightly different contest. 

I'm interested in making projects happen on Less Wrong. In order to find out what works and to inspire others to try things too, I'm sponsoring the following small project:

Spaced Repetition is often mentioned on Less Wrong as a technique for adding facts to memory. I've started using Anki and it certainly seems to be useful. However, I haven't seen a good summary of evidence on Spaced Repetition and I would like to change that.

I hereby offer a prize, currently $385, to the best literature review submitted by August 1st. 'Best' will be judged by voting with discussion beforehand by the Seattle LW meetup group. People are not allowed to vote for their own submissions.

The summary should address questions such as:

  • What spacing is best?
  • How much does spaced repetition actually help memory?
  • Does spaced repetition have hidden benefits or costs?
  • Does the effectiveness vary across domains? How much? 
  • Is there research on the kinds of questions that work best? Especially for avoiding 'guessing the password' and memorizing the card per se rather than the material.
  • What questions do researchers think are most important?
  • Is there any interesting ongoing research? If so, what is it on?
  • What, if any, questions do researchers think it is important to answer? Are there other unanswered questions that would jump out at a smart person?
  • What does spaced repetition not do that people might expect it to?

The post should summarize the state of current evidence and provide citations to back up the claims in the article. Referencing both academic and non-academic research is encouraged. Lukeprog's The Science of Winning At Life sequence contains several examples of good literature review posts.
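For readers unfamiliar with how spacing is actually computed: Anki descends from SuperMemo's published SM-2 algorithm, and the "what spacing is best?" question above is partly a question about formulas like the following. This is a simplified sketch of SM-2-style scheduling using the published SM-2 ease-factor update, not Anki's exact implementation:

```python
# Rough sketch of SM-2-style interval scheduling (the family of
# algorithms behind Anki). Simplified for illustration; real
# implementations add fuzzing, lapse handling, per-deck options, etc.

def next_review(interval_days, ease, quality):
    """quality: 0-5 self-rating. Below 3, the card lapses and restarts;
    otherwise the interval grows and the ease factor is adjusted."""
    if quality < 3:
        return 1, ease  # lapse: see the card again tomorrow
    # SM-2 ease-factor update, floored at 1.3
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    if interval_days == 0:
        return 1, ease          # first successful review
    if interval_days == 1:
        return 6, ease          # second successful review
    return round(interval_days * ease), ease

# A card answered perfectly five times in a row: intervals stretch out.
interval, ease = 0, 2.5
schedule = []
for _ in range(5):
    interval, ease = next_review(interval, ease, quality=5)
    schedule.append(interval)
print(schedule)  # → [1, 6, 17, 49, 147]
```

The exponential stretching of intervals is the core claim a literature review would need to evaluate against the evidence.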

If you think you would benefit from the result of this project, please add to the prize! You can contribute to the prize on the ChipIn page.

If you have suggestions, questions or comments, please leave them in the comments. Prizes demotivating? Due date too soon/far? Specification too vague? Judgement procedure not credible enough?

This project is tagged with the 'project' tag and listed on the Projects wiki page.

[prize] Spaced Repetition literature review

17 jsalvatier 07 June 2011 03:28AM

EDIT: I am canceling this contest because I feel that the structure of the incentives was poorly thought out (see gwern's comments). I will be posting a new, better structured contest in the near future (now posted). If you feel this is unfair or otherwise feel slighted, please contact me at my username at gmail.com.

 

I'm interested in making projects happen on Less Wrong. In order to find out what works and to inspire others to try things too, I'm sponsoring the following small project:

Spaced Repetition is often mentioned on Less Wrong as a technique for remembering things. I've started using Anki and it certainly seems to be useful. However, I haven't seen a good summary of evidence on Spaced Repetition.

I hereby offer a prize to the first person to submit a good summary of the evidence on Spaced Repetition to the main page. The winner will get the prize, currently $265 + $40 to charity (see comments).

The summary should address at least the following questions:

  • What spacing is best?
  • How much does spaced repetition actually help memory?
  • Does spaced repetition have hidden benefits or costs?
  • Does the effectiveness vary across domains? How much?
  • Is there research on the kinds of questions that work best?
  • What questions do researchers think are most important?
  • Is there any interesting ongoing research? If so, what is it on?
  • What, if any, questions do researchers think it is important to answer? Are there other unanswered questions that would jump out at a smart person?
  • What does spaced repetition not do that people might expect it to?
The post should summarize the state of current evidence and provide citations to back up the claims in the article.

If you think you would benefit from the result of this project, please add to the prize! You can contribute to the prize on the ChipIn page.

Whether the summary is 'good' will be judged by me. If there is a serious dispute, I'll agree to dispute resolution by any uninvolved party with more than 5k karma.

If you have suggestions, questions or comments, please leave them in the comments.

If you would like to work on this project, please say so in the comments below. Collaboration is encouraged.

This project is tagged with the 'project' tag and listed on the Projects wiki page.

Proposal: Systematic Search for Useful Ideas

6 Nic_Smith 01 June 2011 12:09AM

LessWrong is a font of good ideas, but the topics and interests usually expressed and explored here tend to cluster around a few areas. As such, high-value topics may still exist for the community in other fields, which can be systematically explored rather than waiting for a random encounter. Additionally, there seems to be interest here in examining a wider variety of topics. In order to do this, I suggest creating a community list of areas to look into (besides the usual AI, Cog Sci, Comp Sci, Econ, Math, Philosophy, Psych, Statistics, etc.) and then reading a bit on the basics of these fields. In addition to potentially uncovering useful ideas per se, this also might offer the opportunity to populate the textbooks resource list and engage in not-random acts of scholarship.

Everyone Split Up, There’s a Lot of Ideosphere to Cover

A rough sketch of how I think the project will work follows. I’ll be proceeding with this and tackling at least one or two subjects as long as there’s at least a few other people interested in working on it too.

Step 1, Community Evaluation: Using All Our Ideas or similar, generate a list of fields to investigate.
Step 2, Sign-Up: People have the best sense of what they already know and their abilities, so at this point anyone that wants to can pick a subject that’s best for them to look into.
Step 3, Study: I imagine this will mostly involve self-directed reading of a handful of texts, watching some online videos, and maybe calling up one or two people -- in other words, nothing too dramatic. If a vein of something interesting is found, it’s probably better that it’s “marked” for further follow-up rather than further examined alone.
Step 4, Post: Some of these investigations will not reveal anything -- that’s actually a good thing (explained below); for these, a short “Looked into it, nothing here” sort of comment should suffice. Subjects with bigger findings should get bigger, more detailed comments/posts.

Evaluation of Proposal

As a first step, I’ll use a variation of the Heilmeier questions which is an (admittedly idiosyncratic) mix of the original version and gregv’s enhanced version.

  • What are you trying to do? Articulate your objectives using absolutely no jargon.
    Produce comments or posts providing very brief overviews of fields of knowledge, not previously discussed here, with notes pertaining to Less Wrong topics and interests.
  • Who cares? How many people will benefit?
    This post is partially an attempt to determine that, but there seems to be at least some interest in more variety on the site (see above). Additionally, the posts should be a good general resource for anyone that stumbles across them, and might even make good content for search purposes.
  • Why hasn't someone already solved this problem? What makes you think what stopped them won't stop you?
    The idea is roughly book club meets Wikipedia, but with an emphasis on creating a small evaluative body of knowledge rather than a massive descriptive encyclopedia, and with a LessWrong twist. The sharper focus should make the results more useful to go through than just hitting “random page” in yon encyclopedia.
  • How much have projects like this cost (time equivalent)?
    Some have the ability to take on “whole fields of knowledge in mere weeks” but that’s not typical -- investigating a subject in this case is roughly comparable in complexity to taking an introductory class or two, which people without any previous training normally accomplish over a period of about three to four months at a pace which is not especially strenuous, and with fairly light monetary costs beyond tuition/fees (which aren't applicable here).
  • What are the midterm and final "exams" to check for success?
For each individual investigation, a good “midterm” check would be for the person looking into a field to have a list of resources or texts they’re working on. The final “exam” is a posting indicating if anything useful or interesting was found, and if so, what.
  • If y [this community search] fails to solve x [uncover useful knowledge in fields previously under-examined on LessWrong], what would that teach you that you (hopefully) didn't know at the beginning?
Quite possibly, this could be a good thing -- it indicates that the mix of topics on LessWrong is approximately right, and things can continue on. In this case, we’d end up seeing a bunch of short “nothing interesting here” comments, and can rest more or less assured that further investigation into even more minute detail is unnecessary. This is conditional on not-terrible scholarship and a reasonably good priority list from step 1.

Making projects happen

13 jsalvatier 31 May 2011 03:56PM

Judging by the number of upvotes, Brandon Reinhart's analysis of SIAI's financial filings is valuable to quite a few people. Similar analyses of Alcor and the Cryonics Institute would be quite valuable. There has been talk of more work on condensing LW content and placing it on the wiki. I'm sure lots of people would like to know about the literature on low-dose aspirin. People seem to want a front page more accessible to newcomers. Will these projects get accomplished? Some of them, but probably fewer than optimal. I think we can do better.

I would like to look for ways to channel group willingness to contribute to a project into focused individual willingness to work on a project.

Observations about the problem space

The following is based on discussions at the Seattle Less Wrong meetup.

Many people would get a moderate amount of benefit from such projects, but only a small number would end up putting in the hard work to make them happen. 

The people most enthusiastic about a given project may not be the best people to work on the project. Perhaps they have very time consuming jobs or have a hard time being objective about the topic (e.g. someone who gets especially emotional about Cryonics) or have too many other projects already or perhaps they are intellectually motivated but not emotionally motivated by the project which might make it difficult to Get Things Done. 

Trying to generalize too early is a risk here. Going out and building fancy tools or otherwise trying something elaborate is probably not a good idea at first. Better to try some concrete trials first and learn from those experiences.

Sources of motivation

There are three major potential sources of motivation: Money (the unit of caring), social status (Karma, kind words etc.), things (pizza, books, cookies, pony pictures).

  • Money
    • Transfers of money (the unit of caring) are often much more efficient than transfers of other goods.
    • Extrinsic rewards (especially money) can reduce intrinsic motivation. 
    • Large monetary rewards can also make relationship between the project contributors and the project sponsors less social. 
    • Many Less Wrong people are highly paid
      • Less likely to be motivated by small monetary rewards
      • Have more money to contribute to projects. 
      • Not all Less Wrong people are highly paid.
    • There are services for collecting donations (link).
  • Social rewards
    • Praise 
    • Karma
    • Social status
  • Things
  • Social pressure
    • requests
    • progress monitoring

Different motivators may work better for different kinds of projects. For example, money might be a counterproductive motivator for social projects but a great motivator for setting up a website.

How have others tackled this?

This is a problem others face as well. How do other similar groups and communities ameliorate it?

  • Intrinsic motivation
    • Conferring social status on those who do valuable work
  • Sprints: several people get together in a single place and work together on a project for a couple of days.
    • Main draw seems to be Fun
    • Frequently used by Python projects
  • Competition/bounties (McKinsey survey of prize literature)
    • Provides social and/or material rewards
    • Sometimes used on LW (link 1, link 2, link 3).
    • Seems to work well for some larger open source software projects (link 1, link 2, link 3), though some fail to get off the ground at all.
    • Poorly arranged prizes can induce wasted effort
    • Judging quality can be a serious issue especially when monetary rewards are involved
      • potential for social conflict
      • some people are better at dealing with social conflicts than others
      • pre-designated arbiters more likely to be trusted than others

Miscellaneous observations

  • Working groups or otherwise close contact sometimes increase people's motivations via peer pressure.
  • Personally requesting someone work on a project can increase their motivation to do so.
  • With certain kinds of motivation you often get people agreeing to work on a project and then getting slightly stuck and delaying it indefinitely. (Patri Friedman has given one reason why this might happen)
  • Different incentives might work better/worse for different kinds of projects. 
  • Monitoring project progress could help motivation (it might also have other benefits, such as knowing when to rethink the project or to find another person to work on it).
  • Splitting up a project into a number of small clear tasks that individuals can pick up and complete decreases the costs of working on projects. The very fact of announcing, specifying and taskifying a project can induce interest. 
  • Open projects (Wikipedia, open source projects) are often primarily worked on by a small group of highly dedicated contributors.
  • Want to encourage quality
    • sometimes something is better than nothing
    • sometimes drafts and large output volume are useful for future work
  • People most interested in the results of a project are not always the people best suited to do the project.
  • High visibility projects
    • Increase interest in working on projects
    • Completed projects give social rewards to completers
    • Completed projects serve as templates for future related projects
  • Quantifying aggregate interest (both in terms of number and intensity) is useful for deciding what projects are most important
  • Aggregating what skills potential project contributors have is useful for determining what projects are possible

In the interest of Holding Off On Proposing Solutions, please take a moment to try to identify features of the problem space that I have not mentioned before reading the comments. Please mention any features you notice as well as any potential solutions or parts of solutions in the comments. I have some ideas, and I will propose them in the comments.

Bridging Inferential Gaps and Explaining Rationality to Other People

9 atucker 13 February 2011 06:22AM

This post is going in discussion until I get it edited enough that I feel like it's post-worthy, or until it does well.


Core Post:

Rationality has helped me do a lot of things (in the past year: being elected President of my robotics team, getting a girlfriend, writing good college apps (and getting into a bunch of good schools), etc.), and I feel sort of guilty for not helping other people use it.

I had made progress on a lot of those fronts before, but a bunch of things fell into place in a relatively short period of time after I started trying to optimize them. Some of my friends have easyish problems, but unsolicited risky counterintuitive advice is uncouth and unhelpful.

More pressingly, I want to pass on a lot of rationality knowledge to people I know before I graduate high school. Being in a fairly good Math/Science/Computer Science Magnet Program, I have access to a lot of smart, driven people who have a lot of flexibility in their lives and I think it would be a shame if there were things I could tell them that would make them do a lot better. On top of that, I want to pass on this knowledge within my robotics team so that they continue doing well.

Basically, I want to learn how to explain useful rationality concepts to other people in a non-annoying and effective way. As far as I can tell, many people want to do similar things, and find it difficult to do so.

I suspect that this topic is broad enough that it would be hard for a single person to tackle it in one post. So that people don't need to have enough information for an entire post (which would be awesome, by the way) before they talk about it, here's a thread to respond to.

I'd particularly like to encourage people who have successfully bridged inferential distances to reply with where people started and how the conversation went. Please. An example:

In my Origins of Science (basically a philosophy) class, a conversation like this (paraphrased, happened a few days ago) took place. I'm not sure where the other people in the class started, but it got them to the point that they understood how you model reality, that beliefs are supposed to reflect reality, and that you can't just make things up entirely.

W: "I feel like if people want to think God exists, then God exists for them, but if they want to ignore him then he won't."

me: "But that's not how existing works. In our thoughts and opinions, we make a map of how the world exists. But the map is not the territory."

W: "But it will still seem real to you..."

me: "Like, you can put whatever you want in your map like dragons or whatever, but that doesn't actually put dragons in the territory. And now it's a failure of your map to reflect the territory, not of the territory to reflect your map"

I could have said the last part better, but I definitely remember saying the last sentence.

The map vs. territory example seems to be really effective, a few people complimented it (and I admitted that I had read it somewhere else). Not sure how much it propagates into other beliefs, I'll update later with how much it seems to affect later conversations in the class.

Questions:
What basic rationality ideas are the most helpful to the most people?

Would it be helpful to try and categorize where people are inferentially? Is it possible?

Observations:

  • Inferential Distance is a big deal. Hence the first part of the title. I was able to explain transhumanism to someone in 3 minutes, and have them totally agree. Other people don't even accept the possibility of AI, let alone that morality can happen when God doesn't exist.
  • It's much easier to convince people who know and like you.
  • There's a difference between getting someone to ostensibly agree with something, and getting it to propagate through their beliefs.
  • People remember rationality best when they benefit from learning it, and it applies to what they're specifically trying to do.
  • It's difficult to give someone specific advice and have them pick up on the thought process that you used to come up with it.
  • Atheists seem to be pretty inferentially close to Singularity-cluster ideas.
  • From an earlier post I got a bunch of helpful feedback, particularly from Nornagest's comment and TheOtherDave. The short versions:
    • Asking people to do specific things is creepy, teaching someone is much more effective if you just tell them the facts and let them do whatever they want with it.
    • People need specifics to actually do something, and it's hard to make them decide to do something substantially different than what they already are doing
  • And from a comment by David Gerard: People need to want to learn/do something; it's hard to push them into it.
  • A lot of people are already doing useful things (research, building businesses), so it might be more helpful to make a bunch of them better than a few of them do something entirely different.

How would you spend 30 million dollars?

2 MariaKonovalenko 17 November 2010 02:28PM

There's a good song by Eminem - If I had a million dollars.  So, if I had a hypothetical task to give away $30 million to different foundations without the right to influence the projects, I would distribute them as follows, $3 million to each organization:

1. Nanofactory collaboration, Robert Freitas, Ralph Merkle – developers of molecular nanotechnology and nanomedicine. Robert Freitas is the author of the monography Nanomedicine.
2. Singularity institute, Michael Vassar, Eliezer Yudkowsky – developers and ideologists of the friendly Artificial Intelligence
3. SENS Foundation, Aubrey de Grey – the most active engineering project in life extension, focused on the most promising underfunded areas
4. Cryonics Institute – one of the biggest cryonics firms in the US, they are able to use the additional funding more effectively as compared to Alcor
5. Advanced Neural Biosciences, Aschwin de Wolf – an independent cryonics research center created by ex-researchers from Suspended Animation
6. Brain observatory – brain scanning
7. University Hospital Careggi in Florence, Paolo Macchiarini – growing organs (not an American medical school, because this amount of money won’t make any difference to the leading American centers)
8. Immortality institute – advocating for immortalism, selected experiments
9. IEET – institute of ethics and emerging technologies – promotion of transhumanist ideas
10. Small research grants of $50-300 thousand

Now, if the task is to most effectively invest $30 million dollars, what projects would be chosen? (By effectiveness here I mean increasing the chances of radical life extension)

Well, off the top of my head:

1. The project: “Creation of technologies to grow a human liver” – $7 million. The project itself costs approximately $30-50 million, but $7 million is enough to achieve some significant intermediate results and will definitely attract more funds from potential investors.
2. Break the world record in sustaining viability of a mammalian head separate from the body - $0.7 million
3. Creation of an information system, which characterizes data on changes during aging in humans, integrates biomarkers of aging, and evaluates the role of pharmacological and other interventions in aging processes – $3 million
4. Research in increasing cryoprotectant efficacy - $3 million
5. Creation and realization of a program “Regulation of epigenome” - $5 million
6. Creation, promotion and lobbying of the program on research and fighting aging - $2 million
7. Educational programs in the fields of biogerontology, neuromodelling, regenerative medicine, engineered organs - $1.5 million
8. “Artificial blood” project - $2 million
9. Grants for authors, script writers, and art representatives for creation of pieces promoting transhumanism - $0.5 million
10. SENS Foundation project of removing senescent cells - $2 million
11. Creation of a US-based non-profit, which would protect and lobby the right to live and scientific research in life extension - $2 million
12. Participation of “H+ managers” in conferences, forums and social events - $1 million
13. Advocacy and creating content in social media - $0.3 million
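Since the second list is an explicit budget against a fixed $30 million, a quick arithmetic check is worthwhile. Working in thousands of dollars to keep the sums exact, the line items above are:

```python
# Sanity check: do the line items in the second list sum to the stated
# $30 million budget? Amounts are in thousands of dollars, in list order
# (liver, head viability, aging database, cryoprotectants, epigenome,
# lobbying, education, artificial blood, art grants, senescent cells,
# non-profit, conferences, social media).
items = [7000, 700, 3000, 3000, 5000, 2000, 1500, 2000, 500,
         2000, 2000, 1000, 300]
total = sum(items)
print(total)  # → 30000, i.e. exactly $30 million
```

The allocation does indeed exhaust the budget, with no remainder for overhead or contingency.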