

Don't Be Afraid of Asking Personally Important Questions of Less Wrong

33 Evan_Gaensbauer 26 October 2014 08:02AM

Related: LessWrong as a social catalyst

I primarily used my prior user profile to ask questions of Less Wrong. When I had an inkling for a query but didn't have a fully formed hypothesis, I wouldn't know how to search for answers on the Internet myself, so I asked on Less Wrong instead.

The reception I have received has been mostly positive. Here are some examples:

  • Back when I was trying to figure out which college major to pursue, I queried Less Wrong about which one was worth my effort. I followed this up with a discussion about whether it was worthwhile for me personally, and for someone in general, to pursue graduate studies.


Other students using Less Wrong benefit from the insight of peers who are further along in their careers:

  • A friend of mine was considering pursuing medicine to earn to give. In the same vein as my own discussion, I suggested he pose the question to Less Wrong. He didn't feel like it at first, so I posed the query on his behalf. Within a few days, he received feedback which returned the conclusion that pursuing medical school through the avenues he was aiming for wasn't his best option relative to his other considerations. He showed up in the thread and expressed his gratitude. The online rationalist community was willing to respond, and provided valuable information on an important question. It might have taken him lots of time, attention, and effort to find the answers to this question by himself.

In engaging with Less Wrong, with the rest of you, my experience has been that Less Wrong isn't just useful as an archive of blog posts, but is actively useful as a community of people. As weird as it may seem, you can generate positive externalities that improve the lives of others merely by writing a blog post. This extends to responding in the comments section too. Stupid Questions Threads are a great example of this; you can ask questions about your procedural knowledge gaps without fear of reprisal. People have gotten great responses on everything from getting more value out of conversations, to being more socially successful, to learning and appreciating music as an adult. Less Wrong may be one of the few online communities for which even the comments sections are useful by default.

The above examples weren't the most popular discussions ever started, and likely didn't get as much traffic as others, but the feedback they received made them more personally valuable to one individual than several more popular threads.

At the CFAR workshop I attended, I was taught two relevant skills:

* Value of Information Calculations: formulating a question well, and performing a Fermi estimate or back-of-the-envelope calculation in an attempt to answer it, generates quantified insight you wouldn't otherwise have anticipated (see the toy sketch after this list).

* Social Comfort Zone Expansion: humans tend to have a greater aversion to trying new things socially than is optimal, and one way of viscerally teaching System 1 this lesson is trial-and-error with small risks. Posting on Less Wrong, especially in a special thread, is a really low-risk action. The pang of losing karma can feel real, but losing karma is a valuable signal that one should try again differently, and it's not as bad as failing at taking risks in meatspace.
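To make the first skill concrete, here is a toy value-of-information calculation in Python. The numbers are invented for illustration (this is my sketch, not part of the CFAR curriculum); the point is only that multiplying a few rough estimates tells you whether writing up a question is worth the time.

```python
# Toy back-of-the-envelope / value-of-information estimate.
# All numbers below are made-up placeholders; plug in your own.

p_decision_changes = 0.10        # chance the answers actually shift your choice
value_if_changed   = 2000.0      # dollar value of ending up with the better option
cost_of_asking     = 0.5 * 30.0  # half an hour of writing at an assumed $30/hour

expected_benefit = p_decision_changes * value_if_changed
print(f"Expected benefit: ${expected_benefit:.0f} vs. cost: ${cost_of_asking:.0f}")
# With these placeholder numbers the question is worth asking by roughly 13x;
# the useful exercise is seeing which of your own estimates drives the answer.
```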

When I've received downvotes for a comment, I interpret that as useful information, try to model what I did wrong, and thank others for correcting my confused thinking. If you're worried about writing something embarrassing, that's understandable, but realize it's a fact about your untested anticipations, not a fact about everyone else using Less Wrong. There are dozens of brilliant people with valuable insights at the ready, who read Less Wrong for fun, and who like helping the rest of us answer our personal questions. Users shminux and Carl Shulman are exemplars of this.

This isn't an issue for all users, but I feel as if not enough users are taking advantage of the personal value they can get by asking more questions. This post is intended to encourage them. User Gunnar_Zarncke suggested that if enough examples of experiences like this were accrued, they could be turned into some sort of repository of personal value from Less Wrong.

Maybe you want to maximise paperclips too

29 dougclow 30 October 2014 09:40PM

As most LWers will know, Clippy the Paperclip Maximiser is a superintelligence who wants to tile the universe with paperclips. The LessWrong wiki entry for Paperclip Maximizer says that:

The goal of maximizing paperclips is chosen for illustrative purposes because it is very unlikely to be implemented

I think that a massively powerful star-faring entity - whether a Friendly AI, a far-future human civilisation, aliens, or whatever - might indeed end up essentially converting huge swathes of matter into paperclips. Whether a massively powerful star-faring entity is likely to arise is, of course, a separate question. But if it does arise, it could well want to tile the universe with paperclips.

Let me explain.


To travel across the stars and achieve whatever noble goals you might have (assuming they scale up), you are going to want energy. A lot of energy. Where do you get it? Well, at interstellar scales, your only options are nuclear fusion or maybe fission.

Iron has the highest binding energy per nucleon of any nucleus. If you have elements lighter than iron, you can release energy through nuclear fusion - sticking atoms together to make bigger ones. If you have elements heavier than iron, you can release energy through nuclear fission - splitting atoms apart to make smaller ones. We can do this now for a handful of elements (mostly selected isotopes of uranium, plutonium and hydrogen) but we don't know how to do this for most of the others - yet. But it looks thermodynamically possible. So if you are a massively powerful and massively clever galaxy-hopping agent, you can extract maximum energy for your purposes by taking up all the non-ferrous matter you can find and turning it into iron, getting energy through fusion or fission as appropriate.
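As a rough illustration of why iron is the endpoint (my sketch, not part of the post), the semi-empirical Bethe-Weizsäcker mass formula with standard textbook coefficients reproduces the familiar curve: binding energy per nucleon peaks around the iron/nickel region, so fusing lighter nuclei and splitting heavier ones both release energy until you reach the iron group.

```python
# Approximate binding energy per nucleon (MeV) from the semi-empirical mass
# formula, using common textbook coefficients. Crude for very light nuclei,
# but it shows the peak near Fe-56 / Ni-62.

def binding_energy_per_nucleon(Z, A):
    aV, aS, aC, aA, aP = 15.8, 18.3, 0.714, 23.2, 12.0   # MeV
    N = A - Z
    B = (aV * A
         - aS * A ** (2 / 3)                  # surface term
         - aC * Z * (Z - 1) / A ** (1 / 3)    # Coulomb repulsion
         - aA * (A - 2 * Z) ** 2 / A)         # symmetry term
    if Z % 2 == 0 and N % 2 == 0:             # pairing term
        B += aP / A ** 0.5
    elif Z % 2 == 1 and N % 2 == 1:
        B -= aP / A ** 0.5
    return B / A

for name, Z, A in [("He-4", 2, 4), ("C-12", 6, 12), ("Fe-56", 26, 56),
                   ("Ni-62", 28, 62), ("U-238", 92, 238)]:
    print(f"{name:>6}: ~{binding_energy_per_nucleon(Z, A):.2f} MeV/nucleon")
# The estimate climbs to roughly 8.8 MeV/nucleon near Fe-56/Ni-62 and falls
# to about 7.6 for U-238, which is why both fusion and fission "roll
# downhill" toward the iron group.
```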

You leave behind you a cold, dark trail of iron.

That seems a little grim. If you have any aesthetic sense, you might want to make it prettier, to leave an enduring sign of values beyond mere energy acquisition. With careful engineering, it would take only a tiny, tiny amount of extra effort to leave the iron arranged into beautiful shapes. Curves are nice. What do you call a lump of iron arranged into an artfully-twisted shape? I think we could reasonably call it a paperclip.

Over time, the amount of space that you've visited and harvested for energy will increase, and the amount of space available for your noble goals - or for anyone else's - will decrease. Gradually but steadily, you are converting the universe into artfully-twisted pieces of iron. To an onlooker who doesn't see or understand your noble goals, you will look a lot like a paperclip maximiser. In Eliezer's terms, your desire to do so is an instrumental value, not a terminal value. But - conditional on my wild speculations about energy sources here being correct - it's what you'll do.

Wikipedia articles from the future

17 snarles 29 October 2014 12:49PM

Speculation is important for forecasting; it's also fun. Speculation is usually conveyed in two forms: as an argument, or encapsulated in fiction. Each has its advantages, but both tend to be time-consuming. Presenting speculation in the form of an argument involves researching relevant background and formulating logical arguments. Presenting speculation in the form of fiction requires world-building and storytelling skills, but it can quickly give the reader an impression of the "big picture" implications of the speculation; this can be more effective at establishing the "emotional plausibility" of the speculation.

I suggest a storytelling medium which can combine attributes of both arguments and fiction, but requires less work than either: the "Wikipedia article from the future." Fiction written by inexperienced sci-fi writers tends to degenerate into a speculative encyclopedia anyway - why not just admit that you want to write an encyclopedia in the first place? Post your "Wikipedia articles from the future" below.

Stupid Questions (10/27/2014)

14 drethelin 27 October 2014 09:27PM

I think it's past time for another Stupid Questions thread, so here we go. 

 

This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Please respect people trying to fix any ignorance they might have, rather than mocking that ignorance. 

 

Things to consider when optimizing: Sleep

13 mushroom 28 October 2014 05:26PM

I'd like to have a series of discussion posts, where each post is of the form "Let's brainstorm things you might consider when optimizing X", where X is something like sleep, exercise, commuting, studying, etc. Think of it like a specialized repository.

In the spirit of try more things, the direct benefit is to provide insights like "Oh, I never realized that BLAH is a knob I can fiddle. This gives me an idea of how I might change BLAH given my particular circumstances. I will try this and see what happens!"

The indirect benefit is to practice instrumental rationality using the "toy problem" provided by a general prompt.

Accordingly, participation could be in many forms:

* Pointers to scientific research
* General directions to consider
* Personal experience
* Boring advice
* Intersections with other community ideas, biases
* Cost-benefit, value-of-information analysis
* Related questions
* Other musings, thoughts, speculation, links, theories, etc.

This post is on sleep and circadian rhythms.

[Link] "The Problem With Positive Thinking"

13 CronoDAS 26 October 2014 06:50AM

Psychology researchers discuss their findings in a New York Times op-ed piece.

The take-home advice:

Positive thinking fools our minds into perceiving that we’ve already attained our goal, slackening our readiness to pursue it.

...

What does work better is a hybrid approach that combines positive thinking with “realism.” Here’s how it works. Think of a wish. For a few minutes, imagine the wish coming true, letting your mind wander and drift where it will. Then shift gears. Spend a few more minutes imagining the obstacles that stand in the way of realizing your wish.

This simple process, which my colleagues and I call “mental contrasting,” has produced powerful results in laboratory experiments. When participants have performed mental contrasting with reasonable, potentially attainable wishes, they have come away more energized and achieved better results compared with participants who either positively fantasized or dwelt on the obstacles.

When participants have performed mental contrasting with wishes that are not reasonable or attainable, they have disengaged more from these wishes. Mental contrasting spurs us on when it makes sense to pursue a wish, and lets us abandon wishes more readily when it doesn’t, so that we can go after other, more reasonable ambitions.

[Link]"Neural Turing Machines"

10 Prankster 31 October 2014 08:54AM

The paper.

Discusses the technical aspects of one of Google's AI projects. According to a PCWorld article, the system "apes human memory and programming skills" (the article seems pretty solid, and also contains a link to the paper).

The abstract:

We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent. Preliminary results demonstrate that Neural Turing Machines can infer simple algorithms such as copying, sorting, and associative recall from input and output examples.
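For readers who want a concrete feel for the "attentional processes" the abstract mentions, here is a minimal numpy sketch of content-based addressing (my illustration of the general idea, not code from the paper): a key vector is compared to each memory row by cosine similarity, a softmax turns the similarities into a weighting, and the read vector is the weighted sum of memory rows. Every step is differentiable, which is what lets the combined system train with gradient descent.

```python
# Minimal content-based read from an external memory matrix.
import numpy as np

def content_addressing(memory, key, beta=5.0):
    """memory: (N, M) array of N slots; key: (M,) query; beta: focus sharpness."""
    eps = 1e-8
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + eps)
    w = np.exp(beta * sims)                 # softmax over cosine similarities
    return w / w.sum()

def read(memory, weights):
    return weights @ memory                 # convex combination of memory rows

rng = np.random.default_rng(0)
M = rng.normal(size=(8, 16))                # 8 memory slots, 16-dimensional contents
query = M[3] + 0.05 * rng.normal(size=16)   # a slightly noisy copy of slot 3
w = content_addressing(M, query)
print(np.argmax(w), round(float(w.max()), 2))   # expect index 3 to get most of the weight
```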

 

(First post here, feedback on the appropriateness of the post appreciated)

LW Supplement use survey

10 FiftyTwo 28 October 2014 09:28PM

I've put together a very basic survey using Google Forms, inspired by NancyLebovitz's recent discussion post on supplement use.

The survey includes options for "other" and "do not use supplements." Results are anonymous, and you can view all the results once you have filled it in, or use this link.

 

Link to the Survey

Manual for Civilization

8 RyanCarey 31 October 2014 09:36AM
I was wondering how seriously we've considered storing useful information to improve the chance of rebounding from a global catastrophe. I'm sure this has been discussed previously, but not in sufficient depth that I could find it on a short search of the site.

If we value future civilisation, then it may be worth going to significant lengths to reduce existential risks. Some interventions will target specific risky tech, like AI and synthetic biology. However, just as many of today's risks could not have been identified a century ago, we should expect some emerging risks of the coming decades to also catch us by surprise. As argued by Karim Jebari, even if risks are not identifiable, we can take general-purpose measures to reduce them, by analogy to the principles of robustness and safety factors in engineering. One such idea is to create a store of the kind of items one would want to recover from a catastrophe. This idea varies based on which items are chosen and where they are stored.

Nick Beckstead has investigated bunkers, and he basically rejected bunker-improvement because the strength of a bunker would not improve our resilience to known risks like AI, nuclear weapons or biowarfare. However, his analysis was fairly limited in scope. He focused largely on where to put people, food and walls, in order to manage known risks. It would be useful for further analysis to consider where you can put other items, like books, batteries or 3D printers, across a range of scenarios that could arise from known or unknown risks. Though we can't currently identify many plausible risks that would leave us without 99% of civilisation, that's still a plausible situation that it's good to equip ourselves to recover from.

What information would we store? The Knowledge: How to Rebuild Civilisation From Scratch would be a good candidate based on its title alone, and a quick skim over io9's review. One could bury Wikipedia, the Internet Archive, or a bunch of other items suggested by The Long Now Foundation. A computer with a battery, perhaps? Perhaps all of the above, to ward against the possibility that we miscalculate.

Where would we store it? Again, the principle of resilience would seem to dictate that we should store these in a variety of sites. They could be underground and overground, marked and unmarked, at busy and deserted sites of varying climate, and with various levels of security.

In general, this seems to be neglected, cheap, and unusually valuable, and so I would be interested to hear whether LessWrong has any further ideas about how this could be done well.

Further relevant reading:

* GCRI paper, Adaptation to and Recovery From Global Catastrophe
* Svalbard Global Seed Vault, a biodiversity store in the far North of Norway, started by Gates and others.

Link: Elon Musk wants gov't oversight for AI

8 polymathwannabe 28 October 2014 02:15AM

"I'm increasingly inclined to thing there should be some regulatory oversight, maybe at the national and international level just to make sure that we don't do something very foolish."

http://www.cnet.com/news/elon-musk-we-are-summoning-the-demon-with-artificial-intelligence/#ftag=CAD590a51e

Podcasts?

8 Capla 25 October 2014 11:42PM

I discovered podcasts last year, and I love them! Why not hear about new ideas while I'm walking to wherever I'm going? (Some of you might shout "insight porn!", and I think I largely agree. However, 1) I don't have any particular problem with insight porn, and 2) I have frequently been exposed to an idea or been recommended a book through a podcast, which I later followed up on, leading to more substantive intellectual growth.)

I wonder if anyone has favorites that they might want to share with me.

I'll start:

Radiolab is, hands down, the best of all the podcasts. This seems universally recognized: I’ve yet to meet anyone who disagrees. Even the people who make other podcasts think that Radiolab is better than their own. This one regularly invokes a profound sense of wonder at the universe and gratitude for being able to appreciate it. If you missed it somehow, you're probably missing out.

The Freakonomics podcast, in my opinion, comes close to Radiolab. All the things that you thought you knew, but didn’t, and all the things you never knew you wanted to know, but do, in typical Freakonomics style. Listening to their podcast is one of the two things that makes me happy.

There's one other podcast that I consider to be in the same league (and this one you've probably never heard of): The Memory Palace. It tells 5-10 minute stories from history, and it is really well done. It's all the more impressive because, while Radiolab and Freakonomics are both made by professional production teams in radio studios, The Memory Palace is just some guy who makes a podcast.

Those are my three top picks (and they are the only podcasts that I listen to at “normal” speed instead of x1.5 or x2.0, since their audio production is so good).

I discovered Rationally Speaking: Exploring the Borderlands Between Reason and Nonsense recently and I'm loving it. It is my kind of skeptics' podcast, investigating topics that are on the fringe but not outright bunk (I don't need to listen to yet another podcast about how astrology doesn't work). The interplay between the hosts, Massimo (who has a PhD in Philosophy, but also one in Biology, which excuses it) and Julia (who I only just realized is a co-founder of CFAR), is great.

I also sometimes enjoy the Cracked podcast. They are comedians, not philosophers or social scientists, and sometimes their lack of expertise shows (especially when they are discussing topics about which I know more than they do), but comedians often have worthwhile insights and I have been intrigued by ideas they introduced me to or gotten books at the library on their recommendation.

To what is everyone else listening?

Edit: At the suggestion of several members of LessWrong, I've begun listening to Hardcore History and its companion podcast Common Sense. They're both great. I have a good knowledge of history from my school days (I liked the subject, and I seem to have a strong propensity to retain extraneous information, particularly information in narrative form), and Hardcore History episodes are a great refresher course, reviewing material I'm already familiar with, but from a slightly different perspective, yielding new insights and a greater connectivity of history. I think it has almost certainly supplanted the Cracked podcast as number 5 on my list.

Academic papers

5 Capla 30 October 2014 04:53PM

In line with my continuing self-education...

What are the most important or personally influential academic papers you've ever read? Which ones are essential (or just good) for an informed person to have read?

Is there any body of research for which you found the original papers much more valuable than the popularizations or secondary sources (Wikipedia articles, textbook write-ups, etc.), for any reason? What was that reason? Does anyone have a good heuristic for when it is important to "go to the source" and when someone else's summation will do? I have a theoretical preference for reading the original research, since if I need to evaluate an idea's merit, reading what others in that field read (instead of the simplified versions) seems like a good idea, but it has the downside of being harder and more time-consuming.

I have wondered if the only reason to bother with technical sounding papers that are hard to understand is that you have to read them (or pretend to read them) in order to cite them.

 

Superintelligence 7: Decisive strategic advantage

5 KatjaGrace 28 October 2014 01:01AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.


Welcome. This week we discuss the seventh section in the reading guide: Decisive strategic advantage. This corresponds to Chapter 5.

This post summarizes the section, offers a few relevant notes, and suggests ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: Chapter 5 (p78-91)


Summary

  1. Question: will a single artificial intelligence project get to 'dictate the future'? (p78)
  2. We can ask, will a project attain a 'decisive strategic advantage' and will they use this to make a 'singleton'?
    1. 'Decisive strategic advantage' = a level of technological and other advantages sufficient for complete world domination (p78)
    2. 'Singleton' = a single global decision-making agency strong enough to solve all major global coordination problems (p78, 83)
  3. A project will get a decisive strategic advantage if there is a big enough gap between its capability and that of other projects. 
  4. A faster takeoff would make this gap bigger. Other factors would too, e.g. diffusion of ideas, regulation or expropriation of winnings, the ease of staying ahead once you are far enough ahead, and AI solutions to loyalty issues (p78-9)
  5. In some historical examples, leading projects have had a gap of a few months to a few years over those following them. (p79)
  6. Even if a second project starts taking off before the first is done, the first may emerge decisively advantageous. If we imagine takeoff accelerating, a project that starts out just behind the leading project might still be far inferior when the leading project reaches superintelligence. (p82)
  7. How large would a successful project be? (p83) If the route to superintelligence is not AI, the project probably needs to be big. If it is AI, size is less clear. If lots of insights are accumulated in open resources, and can be put together or finished by a small team, a successful AI project might be quite small (p83).
  8. We should distinguish the size of the group working on the project, and the size of the group that controls the project (p83-4)
  9. If large powers anticipate an intelligence explosion, they may want to monitor those involved and/or take control. (p84)
  10. It might be easy to monitor very large projects, but hard to trace small projects designed to be secret from the outset. (p85)
  11. Authorities may just not notice what's going on, for instance if politically motivated firms and academics fight against their research being seen as dangerous. (p85)
  12. Various considerations suggest a superintelligence with a decisive strategic advantage would be more likely than a human group to use the advantage to form a singleton (p87-89)

Another view

This week, Paul Christiano contributes a guest sub-post on an alternative perspective:

Typically new technologies do not allow small groups to obtain a “decisive strategic advantage”—they usually diffuse throughout the whole world, or perhaps are limited to a single country or coalition during war. This is consistent with intuition: a small group with a technological advantage will still do further research slower than the rest of the world, unless their technological advantage overwhelms their smaller size.

The result is that small groups will be overtaken by big groups. Usually the small group will sell or lease their technology to society at large first, since a technology’s usefulness is proportional to the scale at which it can be deployed. In extreme cases such as war these gains might be offset by the cost of empowering the enemy. But even in this case we expect the dynamics of coalition-formation to increase the scale of technology-sharing until there are at most a handful of competing factions.

So any discussion of why AI will lead to a decisive strategic advantage must necessarily be a discussion of why AI is an unusual technology.

In the case of AI, the main difference Bostrom highlights is the possibility of an abrupt increase in productivity. In order for a small group to obtain such an advantage, their technological lead must correspond to a large productivity improvement. A team with a billion dollar budget would need to secure something like a 10,000-fold increase in productivity in order to outcompete the rest of the world. Such a jump is conceivable, but I consider it unlikely. There are other conceivable mechanisms distinctive to AI; I don’t think any of them have yet been explored in enough depth to be persuasive to a skeptical audience.


Notes

1. Extreme AI capability does not imply strategic advantage. An AI program could be very capable - such that the sum of all instances of that AI worldwide were far superior (in capability, e.g. economic value) to the rest of humanity's joint efforts - and yet the AI could fail to have a decisive strategic advantage, because it may not be a strategic unit. Instances of the AI may be controlled by different parties across society. In fact this is the usual outcome for technological developments.

2. On gaps between the best AI project and the second best AI project (p79). A large gap might develop either because of an abrupt jump in capability or extremely fast progress (which is much like an abrupt jump), or from one project having consistently faster growth than other projects for a time. Consistently faster progress is a bit like a jump, in that there is presumably some particular highly valuable thing that changed at the start of the fast progress. Robin Hanson frames his Foom debate with Eliezer as about whether there are 'architectural' innovations to be made, by which he means innovations which have a large effect (or so I understood from conversation). This seems like much the same question. On this, Robin says:

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

3. What should activists do? Bostrom points out that activists seeking maximum expected impact might wish to focus their planning on high leverage scenarios, where larger players are not paying attention (p86). This is true, but it's worth noting that changing the probability of large players paying attention is also an option for activists, if they think the 'high leverage scenarios' are likely to be much better or worse.

4. Trade. One key question seems to be whether successful projects are likely to sell their products, or hoard them in the hope of soon taking over the world. I doubt this will be a strategic decision they will make - rather it seems that one of these options will be obviously better given the situation, and we are uncertain about which. A lone inventor of writing should probably not have hoarded it for a solitary power grab, even though it could reasonably have seemed like a good candidate for radically speeding up the process of self-improvement.

5. Disagreement. Note that though few people believe that a single AI project will get to dictate the future, this is often because they disagree with things in the previous chapter - e.g. that a single AI project will plausibly become more capable than the world in the space of less than a month.

6. How big is the AI project? Bostrom distinguishes between the size of the effort to make AI and the size of the group ultimately controlling its decisions. Note that the people making decisions for the AI project may also not be the people making decisions for the AI - i.e. the agents that emerge. For instance, the AI making company might sell versions of their AI to a range of organizations, modified for their particular goals. While in some sense their AI has taken over the world, the actual agents are acting on behalf of much of society.

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

 

  1. When has anyone gained a 'decisive strategic advantage' at a smaller scale than the world? Can we learn anything interesting about what characteristics a project would need to have such an advantage with respect to the world?
  2. How scalable is innovative project secrecy? Examine past cases: the Manhattan Project, Bletchley Park, Bitcoin, Anonymous, Stuxnet, Skunk Works, Phantom Works, Google X.
  3. How large are the gaps in development time between modern software projects? What dictates this? (e.g. is there diffusion of ideas from engineers talking to each other? From people changing organizations? Do people get far enough ahead that it is hard to follow them?)

 

If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about Cognitive superpowers (section 8). To prepare, read Chapter 6. The discussion will go live at 6pm Pacific time next Monday, 3 November. Sign up to be notified here.

Open thread, Oct. 27 - Nov. 2, 2014

5 MrMind 27 October 2014 08:58AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Cross-temporal dependency, value bounds and superintelligence

4 joaolkf 28 October 2014 03:26PM

In this short post I will attempt to put forth some potential concerns that should be relevant when developing superintelligences, if certain meta-ethical effects exist. I do not claim they exist, only that it might be worth looking for them since their existence would mean some currently irrelevant concerns are in fact relevant. 

 

These meta-ethical effects would be a certain kind of cross-temporal dependency of moral value. First, let me explain what I mean by cross-temporal dependency. If value is cross-temporally dependent, it means that value at t2 could be affected by t1, independently of any causal role t1 has on t2. The same event X at t2 could have more or less moral value depending on whether Z or Y happened at t1. For instance, this could be the case on matters of survival. If we kill someone and replace her with a slightly more valuable person, some would argue there was a loss rather than a gain of moral value; whereas if a new person with moral value equal to the difference of the previous two is created where there was none, most would consider it an absolute gain. Furthermore, some might consider that small, gradual and continual improvements are better than abrupt and big ones. For example, a person who forms an intention and a careful, detailed plan to become better, and who through forceful effort makes herself better, could acquire more value than a person who simply happens to take a pill and instantly becomes a better person - even if they become that exact same person. This is not because effort is intrinsically valuable, but because of personal continuity. There are more intentions, deliberations and desires connecting the two time-slices of the person who changed through effort than there are connecting the two time-slices of the person who changed by taking a pill. Even though both persons become equally morally valuable in isolated terms, they do so via different paths that affect their final value differently.

More examples. You live now, in t1. If suddenly in t2 you were replaced by an alien individual with the same amount of value as you would otherwise have in t2, then t2 may not have the exact same amount of value as it would otherwise have, simply in virtue of the fact that in t1 you were alive and the alien's previous time slice was not. 365 individuals each living for one day do not amount to the same value as a single individual living through 365 days. Slice history into 1-day periods: each day the universe contains one unique advanced civilization with the same overall total moral value, each civilization being completely alien and ineffable to the others; each civilization lives for only one day, and then it is gone forever. This universe does not seem to hold the same moral value as one where a single such civilization flourishes for eternity. In all these examples the value of a period of time seems to be affected by the existence or not of certain events at other periods. They indicate that there is at least some cross-temporal dependency.

 

Now consider another type of effect: bounds on value. There could be a physical bound - transfinite or not - on the total amount of moral value that can be present per instant. For instance, if moral value rests mainly on sentient well-being, which can be categorized as a particular kind of computation, and there is a bound on the total amount of such computation which can be performed per instant, then there is a bound on the amount of value per instant. If, arguably, we are currently extremely far from such a bound, and this bound will eventually be reached by a superintelligence (or any other structure), then the total moral value of the universe would be dominated by the value at this physical bound, given that regions where the physical bound wasn't reached would make negligible contributions. The faster the bound can be reached, the more negligible pre-bound values become.

 

Finally, if there is a form of cross-temporal value dependence where the events leading up to a superintelligence could alter the value of this physical bound, then we not only ought to make sure we safely construct a superintelligence, but also that we do so following the path that maximizes the bound. It might be the case that an overly abrupt superintelligence would decrease the bound, so that all future moral value would be diminished by the fact that there was a huge discontinuity in the events leading to that future. Even small decreases in the bound would have dramatic effects. Although I do not know of any plausible cross-temporal effect of this kind, it seems this question deserves at least a minimal amount of thought. Both cross-temporal dependency and bounds on value seem plausible (in fact I believe some forms of them are true), so it is not at all prima facie inconceivable that we could have cross-temporal effects changing the bound up or down.

Donation Discussion - alternatives to the Against Malaria Foundation

4 ancientcampus 28 October 2014 03:00AM

About a year and a half ago, I made a donation to the Against Malaria Foundation. This was during jkaufman's generous matching offer.

That was 20 months ago, and my money is still in the "underwriting" phase - funding projects that are still, as yet, just plans and no nets.

Now, the AMF has given a reasonable explanation for why it was taking longer than expected:

"A provisional, large distribution in a province of the [Democratic Republic of the Congo] will not proceed as the distribution agent was unable to agree to the process requested by AMF during the timeframe needed by our co-funding partner."

So they've hit a snag, the earlier project fell through, and they are only now allocating my money to a new project. Don't get me wrong, I am very glad they are telling me where my money is going, and especially glad it didn't just end up in someone's pocket instead. With that said, though, I still must come to this conclusion:

The AMF seems to have more money than they can use, right now.

So, LW, I have the following questions:

  1. Is this a problem? Should one give their funds to another charity for the time being?
  2. Regardless of your answer to the above, are there any recommendations for other transparent, efficient charities? [other than MIRI]

Weekly LW Meetups

3 FrankAdamek 31 October 2014 07:50PM

This summary was posted to LW Main on October 24th. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Moscow, Mountain View, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Toronto, Vienna, Washington DC, Waterloo, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.


Why is the A-Theory of Time Attractive?

1 Tyrrell_McAllister 31 October 2014 11:11PM

I've always been puzzled by why so many people have such strong intuitions about whether the A-theory or the B-theory of time is true.  It seems like nothing psychologically important turns on this question.  And yet, people often have a very strong intuition supporting one theory over the other.  Moreover, this intuition seems to be remarkably primitive.  That is, whichever theory you prefer, you probably felt an immediate affinity for that conception of time as soon as you started thinking about time at all.  The intuition that time is A-theoretic or B-theoretic seems pre-philosophical, whichever intuition you have.  This intuition will then shape your subsequent theoretical speculations about time, rather than vice versa.

Consider, by way of contrast, intuitions about God.  People often have a strong pre-theoretical intuition about whether God exists.  But it is easy to imagine how someone could form a strong emotional attachment to the existence of God early in life.  Can emotional significance explain why people have deeply felt intuitions about time?  It seems like the nature of time should be emotionally neutral.[1]

Now, strong intuitions about emotionally neutral topics aren't so uncommon.  For example, we have strong intuitions about how addition behaves for large integers.  But usually, it seems, such intuitions are nearly unanimous and can be attributed to our common biological or cultural heritage.  Strong disagreeing intuitions about neutral topics seem rarer.

Speaking for myself, the B-theory has always seemed just obviously true.  I can't really make coherent sense out of the A-theory.  If I had never encountered the A-theory, the idea that time might work like that would not have occurred to me.  Nonetheless, at the risk of being rude, I am going to speculate about how A-theorists got that way.  (B-theorists, of course, just follow the evidence ;).)

I wonder if the real psycho-philosophical root of the A-theory is the following. If you feel strongly committed to the A-theory, maybe you are being pushed into that position by two conflicting intuitions about your own personal identity.

Intuition 1: On the one hand, you have a notion of personal identity according to which you are just whatever is accessible to your self-awareness right now, plus maybe whatever metaphysical "supporting machinery" allows you to have this kind of self-awareness.

Intuition 2: On the other hand, you feel that you must identify yourself, in some sense, with you-tomorrow.  Otherwise, you can give no "rational" account of the particular way in which you care about and feel responsible for this particular tomorrow-person, as opposed to Britney-Spears-tomorrow, say.

But now you have a problem.  It seems that if you take this second intuition seriously, then the first intuition implies that the experiences of you-tomorrow should be accessible to you-now.  Obviously, this is not the case.  You-tomorrow will have some particular contents of self-awareness, but those contents aren't accessible to you-now.  Indeed, entirely different contents completely fill your awareness now — contents which will not be accessible in this direct and immediate way to you-tomorrow.

So, to hold onto both intuitions, you must somehow block the inference made in the previous paragraph.  One way to do this is to go through the following sequence:

  1. Take the first intuition on board without reservation.
  2. Take the second intuition on board in a modified way: "identify" you-now with you-tomorrow, but don't stop there.  If you left things at this point, the relationship of "identity" would entail a conduit through which all of your tomorrow-awareness should explode into your now, overlaying or crowding out your now-awareness.  You must somehow forestall this inference, so...
  3. Deny that you-tomorrow exists!  At least, deny that it exists in the full sense of the word.  Thus, metaphorically, you put up a "veil of nonexistence" between you-tomorrow and you-now.  This veil of nonexistence explains the absence of the tomorrow-awareness from your present awareness. The tomorrow-awareness is absent because it simply doesn't exist!  (—yet!)  Thus, in step (2), you may safely identify you-now with you-tomorrow.  You can go ahead and open that conduit to the future, without any fear of what would pour through into the now, because there simply is nothing on the other side.

One potential problem with this psychological explanation is that it doesn't explain the significance of "becoming".  Some A-theorists report that a particular basic experience of "becoming" is the immediate reason for their attachment to the A-theory.  But the story above doesn't really have anything to do with "becoming", at least not obviously.  (This is because I can't make heads or tails of "becoming".)

Second, intuitions about time, even in their primitive pre-reflective state, are intuitions about everything in time.  Yet the story above is exclusively about oneself in time.  It seems that it would require something more to pass from intuitions about oneself in time to intuitions about how the entire universe is in time.


[1] Some people do seem to be attached to the A-theory because they think that the B-theory takes away their free will by implying that what they will choose is already the case right now.  This might explain the emotional significance of the A-theory of time for some people.  But many A-theorists are happy to grant, say, that God already knows what they will do.  I'm trying to understand those A-theorists who aren't bothered by the implications of the B-theory for free will.

Link: Open-source programmable, 3D printable robot for anyone to experiment with

1 polymathwannabe 29 October 2014 02:21PM

Its name is Poppy.

"Both hardware and software are open source. There is not one single Poppy humanoid robot but as many as there are users. This makes it very attractive as it has grown from a purely technological tool to a real social platform."

vaccination research/reading

0 freyley 27 October 2014 05:20PM

Vaccination is probably one of the hardest topics to have a rational discussion about. I have some reason to believe that the author of http://whyarethingsthisway.com/2014/10/23/the-cdc-and-cargo-cult-science/ is someone interested in looking for the truth, not winning a side - at the very least, I'd like to help him when he says this:

I genuinely don’t want to do Cargo Cult Science so if anybody reading this knows of any citations to studies looking at the long term effects of vaccines and finding them benign or beneficial, please, be sure to post them in the comments.

 

I'm getting started on reading the actual papers, but I'm hoping this finds someone who's already done the work and wants to go post it on his site, or if not, someone else who's interested in looking through papers with me - I do better at this kind of work with social support.