
Inefficient Games

7 capybaralet 23 August 2016 05:47PM

There are several well-known games in which the Pareto optima and Nash equilibria are disjoint sets.
The most famous is probably the prisoner's dilemma.  Races to the bottom or tragedies of the commons typically have this feature as well.

I proposed calling these inefficient games.  More generally, games where the sets of Pareto optima and Nash equilibria are distinct (but not disjoint), such as a stag hunt, could be called potentially inefficient games.

It seems worthwhile to study (potentially) inefficient games as a class and see what can be discovered about them, but I don't know of any such work (pointers welcome!).
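To make the definitions concrete, here is a minimal Python sketch using the usual illustrative prisoner's dilemma payoffs (the numbers are my own example, not anything from the post). It enumerates the pure-strategy Nash equilibria and the Pareto optima and checks that the two sets are disjoint:

```python
from itertools import product

# payoffs[(row_action, col_action)] = (row_payoff, col_payoff); C = cooperate, D = defect
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}
actions = ["C", "D"]

def is_nash(profile):
    """No player can gain by unilaterally deviating."""
    for player in (0, 1):
        for alt in actions:
            deviant = list(profile)
            deviant[player] = alt
            if payoffs[tuple(deviant)][player] > payoffs[profile][player]:
                return False
    return True

def is_pareto_optimal(profile):
    """No other profile makes someone better off without making someone worse off."""
    u = payoffs[profile]
    for other in payoffs:
        v = payoffs[other]
        if all(v[i] >= u[i] for i in (0, 1)) and any(v[i] > u[i] for i in (0, 1)):
            return False
    return True

profiles = list(product(actions, repeat=2))
nash = {p for p in profiles if is_nash(p)}
pareto = {p for p in profiles if is_pareto_optimal(p)}
print("Nash equilibria:", nash)       # {('D', 'D')}
print("Pareto optima:  ", pareto)     # {('C', 'C'), ('C', 'D'), ('D', 'C')}
print("Disjoint:", nash.isdisjoint(pareto))  # True
```

With typical stag-hunt payoffs the same check finds that one Nash equilibrium (both hunting stag) is also Pareto-optimal while the other (both hunting hare) is not, which is why the stag hunt is only *potentially* inefficient.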

The map of the risks of aliens

4 turchin 22 August 2016 07:05PM

Stephen Hawking famously said that aliens are one of the main risks to human existence. In this map I will try to show all the rational ways in which aliens could cause human extinction. Paradoxically, even if aliens don't exist, we may be in even bigger danger.

 

1. No aliens exist in our past light cone

1a. The Great Filter is behind us, so Rare Earth is true. There are natural forces in our universe which work against life on Earth, but we don't know whether they are still active. We strongly underestimate such forces because of the anthropic shadow. Still-active forces could include gamma-ray bursts (and other types of cosmic explosions, such as those from magnetars), the instability of Earth's atmosphere, the frequency of large-scale volcanism, and asteroid impacts. We may also underestimate the fragility of our environment and its sensitivity to small human influences, such as global warming becoming runaway global warming.

1b. The Great Filter is ahead of us (and it is not UFAI). Katja Grace has argued that this is a much more probable solution to the Fermi paradox, based on one particular version of the Doomsday argument (SIA). On this view, all technological civilizations go extinct before they become interstellar supercivilizations, which on the scale of Earth's timeline means something like the next century. This accords with our observation that new technologies create ever stronger means of destruction which become available to ever smaller groups of people, and that this process is exponential. So all civilizations terminate themselves before they can create AI, or their AI is unstable and self-terminates too (I have explained elsewhere why this could happen).

 

2. Aliens still exist in our light cone.

a) They exist in the form of a UFAI explosion wave travelling through space at the speed of light. EY thinks this would be a natural outcome of the evolution of AI. We can't see such a wave by definition, and we can only find ourselves in the regions of the Universe it hasn't yet reached. If we create our own wave of AI capable of conquering a big part of the Galaxy, we may be safe from an alien AI wave. Such a wave could start very far away, but sooner or later it would reach us. The anthropic shadow distorts our calculations of its probability.

b) SETI-attack. Aliens exist very far away from us, so they can't reach us physically (yet), but they are able to send information. Here the risk of a SETI-attack exists: aliens send us the description of a computer and a program, which is an AI, and this converts the Earth into another transmitting outpost. Such messages should dominate among all SETI messages. As we build stronger and stronger radio telescopes and other instruments, we have a greater and greater chance of finding messages from them.

c) Aliens are near (several hundred light years away) and know about the Earth, so they have already sent physical spaceships (or other weapons) towards us, having found signs of our technological development and not wanting enemies in their neighborhood. They could send near-light-speed projectiles or beams of particles on an exact collision course with Earth, but this seems improbable, because if they are so near, why haven't they reached Earth already?

d) Aliens are here. Alien nanobots could be in my room right now, and there is no way I could detect them. But sooner or later advancing human technology will be able to find them, which will result in some form of confrontation. If aliens are here, they could be in "Berserker" mode, i.e. waiting until humanity crosses some unknown threshold and then attacking. Aliens may be actively participating in Earth's progress, like "progressors", but the main problem is that their understanding of a positive outcome may not be aligned with our own values (as in the FAI problem).

e) Deadly remains and alien zombies. Aliens suffered some kind of existential catastrophe, and its consequences will affect us. If they created a vacuum phase transition during accelerator experiments, it could reach us at the speed of light without warning. If they created self-replicating non-sentient nanobots (grey goo), these could travel as interstellar dust and convert all solid matter into nanobots, so we could encounter such a grey goo wave in space. If they created even one von Neumann probe with narrow AI, it could still conquer the Universe and be dangerous to Earthlings. If their AI crashed, it could leave semi-intelligent remnants with random, crazy goal systems roaming the Universe. (But these would probably evolve into a colonization wave of von Neumann probes anyway.) If we find their planet or artifacts, these could still carry dangerous tech such as dormant AI programs, nanobots, or bacteria. (Vernor Vinge used this idea as the starting point of the plot of his novel "A Fire Upon the Deep".)

f) We could attract the attention of aliens through METI. By sending signals to stars in order to initiate communication, we could reveal our position in space to potentially hostile aliens. Some people, like Zaitsev, advocate for this; others are strongly opposed. In my opinion the risks of METI are smaller than those of SETI, because our radio signals can only reach the nearest few hundred light years before we create our own strong AI, so we will be able to repulse the most plausible forms of space aggression. Through SETI, however, we are able to receive signals from much greater distances, perhaps as much as one billion light years, if aliens convert their entire home galaxy into a large screen on which they draw a static picture, using individual stars as pixels. They would use von Neumann probes and complex algorithms to draw such a picture; I estimate that it could encode messages as large as 1 Gb and would be visible from half of the Universe. So SETI is exposed to a much larger part of the Universe (perhaps 10 to the power of 10 times more stars), and the danger from SETI is immediate, not a hundred years away.

g) Space war. During future space exploration, humanity may encounter aliens in the Galaxy who are at the same level of development, and this may result in classic star wars.

h) They will not help us. They are here or nearby, but have decided not to help us with x-risk prevention, or (if they are far away) not to broadcast via SETI information about the most important x-risks and about proven ways of preventing them. So they are not altruistic enough to save us from x-risks.

 

3. If we are in a simulation, then the owners of the simulation are, for us, aliens, and they could switch the simulation off. A slow switch-off is possible, and under some conditions it would be the main observable form of switch-off.

 

4. False beliefs about aliens may result in incorrect decisions. Ronald Reagan saw something he thought was a UFO (it was not), and he also had early-onset Alzheimer's; these may be among the reasons he invested so much in the creation of SDI, which in turn provoked a stronger confrontation with the USSR. (This is only my conjecture, but I use it as an illustration of how false beliefs may result in wrong decisions.)

 

5. Prevention of x-risks using aliens:

1. Strange strategy. If all rational straightforward strategies to prevent extinction have failed, as implied by one interpretation of the Fermi paradox, we should try a random strategy.

2. Resurrection by aliens. We could preserve some information about humanity in the hope that aliens will resurrect us, or they could return us to life using our remains on Earth. The Voyager probes already carry such information, and they and other spacecraft may carry incidental samples of human DNA. Radio signals from Earth also carry a lot of information.

3. Request for help. We could send radio messages with a request for help. (I am very skeptical about this; it is only a gesture of despair, unless they are already hiding in the solar system.)

4. Get advice via SETI. We could find advice on how to prevent x-risks in alien messages received via SETI.

5. They are ready to save us. Perhaps they are here and will act to save us if the situation develops into something really bad.

6. We are the risk. We may spread through the universe and colonize other planets, preventing the existence of many alien civilizations or permanently changing their potential and prospects. So we would be an existential risk for them.

 

6. We are the risk for future aliens.

In total, there are several significant risks here, mostly connected with solutions to the Fermi paradox. No matter where the Great Filter is, we are at risk. If we have already passed it, we live in a fragile universe; but the most probable conclusion is that the Great Filter lies very near in our future.

Another important point is the risk of passive SETI, which is the most plausible way we could encounter aliens in the near-term future.

There is also the important risk that we are in a simulation created not by our own descendants but by aliens (or by a UFAI), who may have much less compassion for us. In the latter case the simulation may be modelling unpleasant futures, including large-scale catastrophes and human suffering.

The pdf is here

 

 

Open Thread, Aug. 22 - 28, 2016

3 polymathwannabe 22 August 2016 04:24PM

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

Willpower Schedule

3 SquirrelInHell 22 August 2016 01:05PM

 


TL;DR: your level of willpower depends on how much willpower you expect to need (hypothesis)


 

Time start: 21:44:55 (this is my third exercise in speed writing a LW post)

I.

There is a lot of controversy about how our level of willpower is affected by various factors, including doing "exhausting" tasks beforehand, as well as being told that willpower is a resource that depletes easily (or that it doesn't), etc.

(sorry, I can't go look for references - that would break the speedwriting exercise!)

I am not going to repeat the discussions that already cover those topics; however, I have a new tentative model which (I think) fits the existing data very well, is easy to test, and supersedes all previous models that I have seen.

II.

The idea is very simple, but before I explain it, let me give a similar example from a different aspect of our lives. The example is going to be concerned with, uh, poo.

Have you ever noticed that (if you have a sufficiently regular lifestyle) you conveniently always feel the need to go to the toilet at times when it's possible to do so? For example, how often do you need to go when you are on a bus, versus at home or at work?

The function of your bowels is regulated by subconscious signals about your situation - e.g. if you are stressed, you might become constipated. But it is not only that - they also respond to your routines and to what you are planning to do, not just to the things that are already affecting you.

Have you ever had the experience of a background thought popping up in your mind that you might need to go within the next few hours, but the time was not convenient, so you told that thought to hold it a little bit more? And then it did just that?

III.

The example from the previous section, though possibly quite POOrly chosen (sorry, I couldn't resist), shows something important.

Our subconscious reactions and "settings" of our bodies can interact with our conscious plans in a "smart" way. That is, they do not have to wait to see the effects of what you are doing, to adjust to it - they can pull information from your conscious plans and adjust *before*.

And this is, more or less, the insight that I have added to my current working theory of willpower. It is not very complicated, but perhaps non-obvious. Sufficiently non-obvious that I don't think anyone has suggested it before, even after seeing experimental results that match this excellently.

IV.

To be more accurate, I claim that how much willpower you will have depends on several important factors, such as your energy and mood, but it also depends on how much willpower you expect to need.

For example, if you plan to have a "rest day" and not do any serious work, you might find that you are much less *able* to do work on that day than usual.

It's easy enough to test - so instead of arguing this theoretically, please do just that - give it a test. And make sure to record your levels of willpower several times a day for some time - you'll get some useful data!

 

Time end: 22:00:53. Statistics: 534 words, 2924 characters, 15.97 minutes, 33.4 wpm, 183.1 cpm

DARPA accepting proposals for explainable AI

3 morganism 22 August 2016 12:05AM

"The XAI program will focus the development of multiple systems on addressing challenges problems in two areas: (1) machine learning problems to classify events of interest in heterogeneous, multimedia data; and (2) machine learning problems to construct decision policies for an autonomous system to perform a variety of simulated missions."

"At the end of the program, the final delivery will be a toolkit library consisting of machine learning and human-computer interface software modules that could be used to develop future explainable AI systems. After the program is complete, these toolkits would be available for further refinement and transition into defense or commercial applications"

 

http://www.darpa.mil/program/explainable-artificial-intelligence

Corrigibility through stratified indifference

4 Stuart_Armstrong 19 August 2016 04:11PM

A putative new idea for AI control; index here.

Corrigibility through indifference has a few problems. One of them is that the AI is indifferent between the world in which humans change its utility to v, and the world in which humans try to change its utility but fail.

Now the try-but-fail world is going to be somewhat odd - humans will be reacting by trying to change the utility again, trying to shut the AI down, panicking that a tiny probability event has happened, and so on.

continue reading »

Weekly LW Meetups

1 FrankAdamek 19 August 2016 03:40PM

Deepmind Plans for Rat-Level AI

14 moridinamael 18 August 2016 04:26PM

Demis Hassabis gives a great presentation on the state of Deepmind's work as of April 20, 2016. Skip to 23:12 for the statement of the goal of creating a rat-level AI -- "An AI that can do everything a rat can do," in his words. From his tone, it sounds like this is more of a short-term goal than a long-term one.

I don't think Hassabis is prone to making unrealistic plans or stating overly bold predictions. I strongly encourage you to scan through Deepmind's publication list to get a sense of how quickly they're making progress. (In fact, I encourage you to bookmark that page, because it seems like they add a new paper about twice a month.) The outfit seems to be systematically knocking down all the "Holy Grail" milestones on the way to GAI, and this is just Deepmind. The papers they've put out in just the last year or so concern successful one-shot learning, continuous control, actor-critic architectures, novel memory architectures, policy learning, and bootstrapped gradient learning, and these are just the most stand-out achievements. There's even a paper co-authored by Stuart Armstrong concerning Friendliness concepts on that list.

If we really do have a genuinely rat-level AI within the next couple of years, I think that would justify radically moving forward expectations of AI development timetables. Speaking very naively, if we can go from "sub-nematode" to "mammal that can solve puzzles" in that timeframe, I would view it as a form of proof that "general" intelligence does not require some mysterious ingredient that we haven't discovered yet.

The Future of Humanity Institute is hiring!

10 crmflynn 18 August 2016 01:09PM

FHI is accepting applications for a two-year position as a full-time Research Project Manager. Responsibilities will include coordinating, monitoring, and developing FHI's activities, seeking funding, organizing workshops and conferences, and effectively communicating FHI's research. The Research Project Manager will also be expected to work in collaboration with Professor Nick Bostrom and other researchers to advance their research agendas, and will additionally be expected to produce reports for government, industry, and other relevant organizations.

Applicants will be familiar with existing research and literature in the field and have excellent communication skills, including the ability to write for publication. He or she will have experience of independently managing a research project and of contributing to large policy-relevant reports. Previous professional experience working for non-profit organisations, experience with effective altruism, and a network in the relevant fields associated with existential risk may be an advantage, but are not essential.

To apply please go to https://www.recruit.ox.ac.uk and enter vacancy #124775 (it is also possible to find the job by choosing "Philosophy Faculty" from the department options). The deadline is noon UK time on 29 August. To stay up to date on job opportunities at the Future of Humanity Institute, please sign up for updates on our vacancies newsletter at https://www.fhi.ox.ac.uk/vacancies/.

Superintelligence via whole brain emulation

8 AlexMennen 17 August 2016 04:11AM

Most planning around AI risk seems to start from the premise that superintelligence will come from de novo AGI before whole brain emulation becomes possible. I haven't seen any analysis that assumes both uploads-first and the AI FOOM thesis, a deficiency that I'll try to get a start on correcting in this post.

It is likely possible to use evolutionary algorithms to efficiently modify uploaded brains. If so, uploads would likely be able to set off an intelligence explosion by running evolutionary algorithms on themselves, selecting for something like higher general intelligence.

Since brains are poorly understood, it would likely be very difficult to select for higher intelligence without causing significant value drift. Thus, setting off an intelligence explosion in that way would probably produce unfriendly AI if done carelessly. On the other hand, the modified upload would eventually become capable of figuring out how to improve itself without causing a significant amount of further value drift, and it may be possible to reach that point before too much value drift has already taken place. The expected amount of value drift can be decreased by having long generations between iterations of the evolutionary algorithm, giving the improved brains more time to figure out how to modify the algorithm to minimize further value drift.
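As a toy illustration of the value-drift worry (not anyone's proposed implementation, and with entirely made-up numbers), here is a minimal Python sketch in which simulated "uploads" are selected purely on an intelligence score while an unselected alignment score is left free to wander:

```python
import random

random.seed(0)
POP, GENERATIONS, SIGMA = 100, 50, 0.05  # arbitrary illustrative settings

# each individual is (intelligence, value_alignment); alignment starts perfect at 1.0
population = [(0.0, 1.0) for _ in range(POP)]

for _ in range(GENERATIONS):
    # mutation perturbs both traits, but selection below only looks at intelligence
    mutated = [(i + random.gauss(0, SIGMA), v + random.gauss(0, SIGMA))
               for i, v in population]
    mutated.sort(key=lambda ind: ind[0], reverse=True)  # rank by intelligence only
    survivors = mutated[:POP // 2]
    population = survivors * 2                          # each survivor is copied once

mean_intelligence = sum(i for i, _ in population) / POP
mean_alignment = sum(v for _, v in population) / POP
print(f"mean intelligence score: {mean_intelligence:+.2f} (started at 0.00)")
print(f"mean value alignment:    {mean_alignment:+.2f} (started at 1.00)")
```

Because alignment is never selected on, it random-walks away from its starting value while intelligence climbs steadily; in this crude analogy, longer generations correspond to giving each survivor time to shrink the mutation applied to the values trait before the next round.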

Another possibility is that such an evolutionary algorithm could be used to create brains that are smarter than humans but not by very much, and hopefully with values not too divergent from ours, who would then stop using the evolutionary algorithm and start using their intellects to research de novo Friendly AI, if that ends up looking easier than continuing to run the evolutionary algorithm without too much further value drift.

The strategies of using slow iterations of the evolutionary algorithm, or stopping it after not too long, require coordination among everyone capable of making such modifications to uploads. Thus, it seems safer for whole brain emulation technology to be either heavily regulated or owned by a monopoly, rather than being widely available and unregulated. This closely parallels the AI openness debate, and I'd expect people more concerned with bad actors relative to accidents to disagree.

With de novo artificial superintelligence, the overwhelmingly most likely outcomes are the optimal achievable outcome (if we manage to align its goals with ours) and extinction (if we don't). But uploads start out with human values, and when creating a superintelligence by modifying uploads, the goal would be to not corrupt them too much in the process. Since its values could get partially corrupted, an intelligence explosion that starts with an upload seems much more likely to result in outcomes that are both significantly worse than optimal and significantly better than extinction. Since human brains also already have a capacity for malice, this process also seems slightly more likely to result in outcomes worse than extinction.

The early ways to upload brains will probably be destructive, and may be very risky. Thus the first uploads may be selected for high risk-tolerance. Running an evolutionary algorithm on an uploaded brain would probably involve creating a large number of psychologically broken copies, since the average change to a brain will be negative. Thus the uploads that run evolutionary algorithms on themselves will be selected for not being horrified by this. Both of these selection effects seem like they would select against people who would take caution and goal stability seriously (uploads that run evolutionary algorithms on themselves would also be selected for being okay with creating and deleting spur copies, but this doesn't obviously correlate in either direction with caution). This could be partially mitigated by a monopoly on brain emulation technology. A possible (but probably smaller) source of positive selection is that currently, people who are enthusiastic about uploading their brains correlate strongly with people who are concerned about AI safety, and this correlation may continue once whole brain emulation technology is actually available.

Assuming that hardware speed is not close to being a limiting factor for whole brain emulation, emulations will be able to run at much faster than human speed. This should make emulations better able to monitor the behavior of AIs. Unless we develop ways of evaluating the capabilities of human brains that are much faster than giving them time to attempt difficult tasks, running evolutionary algorithms on brain emulations could only be done very slowly in subjective time (even though it may be quite fast in objective time), which would give emulations a significant advantage in monitoring such a process.

Although there are effects going in both directions, it seems like the uploads-first scenario is probably safer than de novo AI. If this is the case, then it might make sense to accelerate technologies that are needed for whole brain emulation if there are tractable ways of doing so. On the other hand, it is possible that technologies that are useful for whole brain emulation would also be useful for neuromorphic AI, which is probably very unsafe, since it is not amenable to formal verification or being given explicit goals (and unlike emulations, they don't start off already having human goals). Thus, it is probably important to be careful about not accelerating non-WBE neuromorphic AI while attempting to accelerate whole brain emulation. For instance, it seems plausible to me that getting better models of neurons would be useful for creating neuromorphic AIs while better brain scanning would not, and both technologies are necessary for brain uploading, so if that is true, it may make sense to work on improving brain scanning but not on improving neural models.

Willpower Thermodynamics

5 Fluttershy 16 August 2016 03:00AM
Content warning: a couple of LWers apparently think that the concept of ego depletion (also known as willpower depletion) is a memetic hazard, though I find it helpful. Also, the material presented here won't fit everyone's experiences.

What happens if we assume that the idea of ego depletion is basically correct, and try to draw an analogy between thermodynamics and willpower?

Figure 1. Thermodynamics Picture

You probably remember seeing something like the above diagram in a chemistry class. The diagram shows how unstable, or how high in energy, the states that a material can pass through in a chemical reaction are. Here's what the abbreviations mean:

  • SM is the starting material.
  • TS1 and TS2 are the two transition states, which must be passed through to go from SM to EM1 or EM2.
  • EM1 and EM2 are the two possible end materials.

The valleys of both curves represent configurations a material may occupy at the start or end of a chemical reaction. Lower-energy valleys are more stable. However, higher peaks can only be reliably crossed if enough energy is available, e.g. from the temperature being sufficiently high.

The main takeaway from Figure 1 is that reactions which produce the most stable end materials, like ending material 2, from a given set of starting materials aren't always the reactions which are easiest to make happen.

Figure 2. Willpower Picture

We can draw a similar diagram to illustrate how much stress we lose while completing a relaxing activity. Here's what the abbreviations used in Figure 2 mean:

  • SM is your starting mood.
  • TS is your state of topmost stress, which depends on which activity you choose.
  • EM1 and EM2 are your two possible ending moods.

Above, the valley on the left represents how stressed you are before starting one of two possible relaxing activities. The peak in the middle represents how stressed you'll be when attempting to get the activity underway, and the valley on the right represents how stressed you'll be once you're done.

For the sake of simplification, let's say that stress is the opposite of willpower, such that losing stress means you gain willpower, and vice versa. For many people, there's a point at which it's very hard to take on additional stress or use more willpower, such that getting started on an activity that would normally get you to ending mood 2 from an already stressed starting mood is very hard. 

In this figure, both activities restore some willpower. Activity 2 restores much more willpower, but is harder to get started on. As with chemical reactions, the most (emotionally or chemically) stable end state is not always the one that will be reached if the "easiest" activity or reaction that one can get started on is undertaken. 


 

In chemistry, if you want to make end material 2 instead of end material 1, you have to make sure that you have some way of getting over the big peak at transition state 2, such as by making sure the temperature is high enough. In real life, it's also good to have a plan for getting over the big peak at the point of topmost stress. Spending time or attention figuring out what your ending mood 2-producing activities are may also be worthwhile.

Some leisure activities, like browsing the front page of reddit, are ending mood 1-producing activities; they're easy to start, but not very rewarding. Examples of what qualifies as an ending mood 2-producing activity vary between people, but reading books, writing, hiking, meditating, or making games or art qualify as ending mood 2-producing activities for some.

At a minimum, making sure that you end up in a high willpower, low stress ending mood requires paying attention to your ability to handle stress and conserve willpower. Sometimes this implies that taking a break before you really need to means that you'll get more out of your break. Sometimes it means that you should monitor how many spoons and forks you have. In general, though, preferring ending mood 2-producing activities over ending mood 1-producing activities will give you the best results in the long run.

The best-case scenario is that you find a way to automatically turn impulses to do ending mood 1-producing activities into impulses to do ending mood 2-producing activities, such as with the trigger action plan [open Reddit -> move hands into position to do a 5-minute meditation].

Seeking Optimization of New Website "New Atheist Survival Kit," a go-to site for newly-made atheists

4 Bound_up 16 August 2016 01:03AM

I've put together a website, "New Atheist Survival Kit" at atheistkit.wordpress.com

 

The idea is to help new atheists come to terms with their change in belief, and also invite them to become more than atheists: rationalists.

 

And if it helps theists become atheists, too, and helps old atheists become rationalists, so much the better.

 

The bare bones of it are all in place now. Once a few people have gone over it, for editing, and for advice about what to include, leave out, improve, re-organize, whatever, I'll ask a bunch of atheist and rationalist communities to write up their own blurb for us to include in a list of communities that we'll point people to in the "Atheist Communities" or "Thinker's Communities" sections on the main menu.

It includes my rough draft attempt to condense the Metaethics sequence down to a few thousand words and make it stylistically and conceptually accessible to a mass audience, which I could especially use some help with.

 

So, for now, I'm here to ask that anyone interested check it out, and message me any improvements they think worth making, from grammar and spelling all the way up to what content to include, or how to present things.

 

Thanks to all for any help.

European Soylent alternatives

9 ChristianKl 15 August 2016 08:22PM

A person at our local LW meetup (not active at LW.com) tested various Soylent alternatives that are available in Europe and wrote a post about them:

______________________

Over the course of the last three months, I've sampled parts of the
European Soylent alternatives to determine which ones would work for me
long-term.

- The prices are always for the standard option and might differ for
e.g. High Protein versions.
- The prices are always for the amount where you get the cheapest
marginal price (usually around a one month supply, i.e. 90 meals)
- Changing your diet to Soylent alternatives quickly leads to increased
flatulence for some time - I'd recommend a slow adoption.
- You can pay for all of them with Bitcoin.
- The list is sorted by overall awesomeness.

So here's my list of reviews:

Joylent:

Taste: 7/10
Texture: 7/10
Price: 5eu / day
Vegan option: Yes
Overall awesomeness: 8/10

This one is probably the european standard for nutritionally complete
meal replacements.

The texture is nice, the taste is somewhat sweet, the flavors aren't
very intensive.
They have an ok amount of different flavors but I reduced my orders to
Mango (+some Chocolate).

They offer a morning version with caffeine and a sports version with
more calories/protein.

They also offer Twennybars (similar to a cereal bar but each offers 1/5
of your daily needs), which everyone who tasted them really liked.
They're nice for those lazy times where you just don't feel like pouring
the powder, adding water and shaking before you get your meal.
They do cost 10eu per day, though.

I also like the general style. Every interaction with them was friendly,
fun and uncomplicated.


Veetal:

Taste: 8/10
Texture: 7/10
Price: 8.70 / day
Vegan option: Yes
Overall awesomeness: 8/10

This seems to be the "natural" option, apparently they add all those
healthy ingredients.

The texture is nice, the taste is sweeter than most, but not very sweet.
They don't offer flavors but the "base taste" is fine, it also works
well with some cocoa powder.

It's my favorite breakfast now and I had it ~54 of the last 60 days.
Would have been first place if not for the relatively high price.


Mana:

Taste: 6/10
Texture: 7/10
Price: 6.57 / day
Vegan option: Only Vegan
Overall awesomeness: 7/10

Mana is one of the very few choices that don't taste sweet but salty.
Among all the ones I've tried, it tastes the most similar to a classic meal.
It has a somewhat oily aftertaste that was a bit unpleasant in the
beginning but is fine now that I got used to it.

They ship the oil in small bottles separate from the rest which you pour
into your shaker with the powder. This adds about 100% more complexity
to preparing a meal.

The packages feel somewhat recycled/biodegradable which I don't like so
much but which isn't actually a problem.

It still made it to the list of meals I want to consume on a regular
basis because it tastes so different from the others (and probably has a
different nutritional profile?).


Nano:

Taste: 7/10
Texture: 7/10
Price: 1.33eu / meal
*I couldn't figure out whether they calculate with 3 or 5 meals per day
** Price is for an order of 666 meals. I guess 222 meals for 1.5eu /meal
is the more reasonable order
Vegan option: Only Vegan
Overall awesomeness: 7/10

Has a relatively sweet taste. Only comes in the standard vanilla-ish flavor.

They offer a Veggie hot meal which is the only one besides Mana that
doesn't taste sweet. It tastes very much like a vegetable soup but was a
bit too spicy for me. (It's also a bit more expensive)

Nano has a very future-y feel about it that I like. It comes in one meal
packages which I don't like too much but that's personal preference.


Queal:

Taste: 7/10
Texture: 6/10
Price: 6.5 / day
Vegan option: No
Overall awesomeness: 7/10

Is generally similar to Joylent (especially in flavor) but seems
strictly inferior (their flavors sound more fun - but don't actually
taste better).


Nutrilent:

Taste: 6/10
Texture: 7/10
Price: 5 / day
Vegan option: No
Overall awesomeness: 6/10

Taste and flavor are also similar to Joylent but it tastes a little
worse. It comes in one meal packages which I don't fancy.


Jake:

Taste: 6/10
Texture: 7/10
Price: 7.46 / day
Vegan option: Only Vegan
Overall awesomeness: 6/10

Has a silky taste/texture (I didn't even know that was a thing before I
tried it). Only has one flavor (vanilla) which is okayish.
Also offers a light and sports option.


Huel:

Taste: 1/10
Texture: 6/10
Price: 6.70 / day
Vegan option: Only Vegan
Overall awesomeness: 4/10

The taste was unanimously rated as awful by every single person to whom
I gave it for trying. The Vanilla flavored version was a bit less awful
than the unflavored version but still...
The worst packaging - it's in huge bags that make it hard to pour and
are generally inconvenient to handle.

Apart from that, it's ok, I guess?


Ambronite:

Taste: ?
Texture: ?
Price: 30 / day
Vegan option: Only Vegan
Overall awesomeness: ?

Price was prohibitive for testing - they advertise it as being very
healthy and natural and stuff.


Fruiticio:

Taste: ?
Texture: ?
Price: 5.76 / day
Vegan option: No
Overall awesomeness: ?

They offer a variety for women and one for men. I didn't see any way for
me to find out which of those I was supposed to order. I had to give up
the ordering process at that point. (I guess you'd have to ask your
doctor which one is for you?)



Conclusion:
Meal replacements are awesome, especially when you don't have much time
to make or eat a "proper" meal.
I generally don't feel full after drinking them but also stop being hungry.
I assume they're healthier than the average European diet.
The texture and flavor do get a bit dull after a while if I only use
meal replacements.

On my usual day I eat one serving of Joylent, Veetal and Mana at the
moment (and have one or two "non-replaced" meals).

 

Identity map

5 turchin 15 August 2016 11:29AM

“Identity” here refers to the question “will my copy be me, and if so, under which conditions?” It results in several paradoxes, which I will not repeat here, hoping that they are known to the reader.

Identity is one of the most complex problems, like safe AI or aging. It only appears to be simple. It is complex because it has to answer the question “Who is who?” in the universe, that is, to create a trajectory in the space of all possible minds connecting identical or continuous observer-moments. But such a trajectory would be of the same complexity as the whole space of possible minds, and that is very complex.

There have been several attempts to dismiss the complexity of the identity problem, like open individualism (I am everybody) or zero-individualism (I exist only now). But they do not prevent the existence of “practical identity” which I use when planning my tomorrow or when I am afraid of future pain.

The identity problem is also very important. If we (or AI) arrive at an incorrect solution, we will end up being replaced by p-zombies or just copies-which-are-not-me during a “great uploading”. It will be a very subtle end of the world.

The identity problem is also equivalent to the immortality problem: if I am able to describe “what is me”, I will know what I need to save forever. This has practical importance now, as I am collecting data for my digital immortality (I even created a startup about it, and the map will be my main contribution to it; if I solve the identity problem I will be able to sell the solution as a service: http://motherboard.vice.com/read/this-transhumanist-records-everything-around-him-so-his-mind-will-live-forever)

So we need to know how much and what kind of information I should preserve in order to be resurrected by future AI. What information is enough to create a copy of me? And is information enough at all?

Moreover, the identity problem (IP) may be equivalent to the benevolent AI problem, because the first problem is, in a nutshell, “What is me?” and the second is “What is good for me?”. In any case, the IP requires a solution to the problem of consciousness, and the AI problem (that is, solving the nature of intelligence) is a closely related topic.

I wrote 100+ pages trying to solve the IP, and became lost in the ocean of ideas. So I decided to use something like the AIXI method of problem solving: I will list all possible solutions, even the most crazy ones, and then assess them.

The following map is connected with several other maps: the map of p-zombies, the plan of future research into the identity problem, and the map of copies. http://lesswrong.com/lw/nsz/the_map_of_pzombies/

The map is based on the idea that each definition of identity is also a definition of Self, and that each is strongly connected with one philosophical worldview (for example, dualism). Each definition of identity answers the question “what is identical to what”. Each definition also provides its own answer to the copy problem as well as its own definition of death - which is just the end of identity - and its own idea of how to reach immortality.

 

So on the horizontal axis we have classes of solutions:

“Self" definition - corresponding identity definition - philosophical reality theory - criteria and question of identity - death and immortality definitions.

 

On the vertical axis, various theories of Self and identity are presented, from the most popular at the top to the less popular below:

1) The group of theories which claim that a copy is not the original, because some kind of non-informational identity substrate exists. Possible substrates: the same atoms, qualia, a soul, or - most popular - continuity of consciousness. All of them require physicalism to be false. But some instruments for preserving identity could be built: for example, we could preserve the same atoms, or preserve the continuity of consciousness of some process, like the flame of a candle. But no valid arguments exist for any of these theories. In Parfit’s terms this is numerical identity (being the same person). It answers the question “What will I experience in the next moment of time?”

2) The group of theories which claim that a copy is the original if it is informationally the same. Here the main question is the amount of information required for identity. Some theories obviously require too much information, like the positions of all atoms in the body being the same, and other theories obviously do not require enough, like just the DNA and the name.

3) The group of theories which see identity as a social phenomenon. My identity is defined by my location and by the ability of others to recognise me as me.

4) The group of theories which connect my identity with my ability to make plans for future actions. Identity is a meaningful part of a decision theory.

5) Indirect definitions of self. This is a group of theories which define something with which the self is strongly connected, but which is not the self: a biological brain, space-time continuity, atoms, cells, or complexity. In this situation we say that we don’t know what constitutes identity, but we can know what it is directly connected with, and could preserve that.

6) Identity as a sum of all its attributes, including name, documents, and recognition by other people. It is close to Leibniz’s definition of identity. Basically, it is a duck test: if it looks like a duck, swims like a duck, and quacks like a duck, then it is probably a duck. 

7) Human identity is something very different from the identity of other things or of other possible minds, as humans have evolved to have an idea of identity, a self-image, the ability to distinguish their own identity from the identity of others, and the ability to predict their own identity. So it is a complex adaptation which consists of many parts, and even if some parts are missing, they could be restored using the other parts.

There is also a problem of legal identity and responsibility.

8) Self-determination. “Self” controls identity, creating its own criteria of identity and declaring its nature. The main idea here is that the conscious mind can redefine its identity in the most useful way. It also includes the idea that self and identity evolve during different stages of personal development.

9) Identity is meaningless. The popularity of this subset of ideas is growing; zero-individualism and open individualism both belong to it. The main counter-argument here is that if we discard the idea of identity, future planning becomes impossible and we have to bring some kind of identity back in through the back door. The idea of identity also comes with the idea of the value of individuality. If we are as replaceable as ants in an anthill, there are no identity problems - and also no problem with murder.

 

The following is a series of even less popular theories of identity, some of which I just constructed ad hoc.

10) Self is a subset of all thinking beings. We could see the space of all possible minds as divided into subsets, and call these subsets separate personalities.

11)  Non-binary definitions of identity.

The idea that binary me-or-not-me solutions are too simple and are the source of all the logical problems. If we define identity continuously, as a number in the interval (0,1), we get rid of some paradoxes: the level of similarity, or the time until a given next stage, could be used as such a measure. Even a complex number could be used if we want to include both informational and continuous identity (in Parfit’s sense).

12) Negative definitions of identity: we could try to say what is not me.

13) Identity as overlapping observer-moments.

14) Identity as a field of indexical uncertainty, that is, a group of observers to which I belong but within which I can’t know which one I am.

15) Conservative approach to identity. As we don’t know what identity is we should try to save as much as possible, and risk our identity only if it is the only means of survival. That means no copy/paste transportation to Mars for pleasure, but yes if it is the only chance to survive (this is my own position).

16)  Identity as individuality, i.e. uniqueness. If individuality doesn’t exist or doesn’t have any value, identity is not important.

17) Identity as a result of the ability to distinguish different people. Identity here is a property of perception.

18) Mathematical identity. Identity may be presented as a number sequence, where each number describes a full state of mind. Useful toy model.

19) Infinite identity. The main idea here is that any mind has a non-zero probability of becoming any other mind after a series of transformations. So only one identity exists in the whole space of possible minds, but the expected time for me to become a given person is dramatically different for the future me (1 day) and for a random person (10 to the power of 100 years). This theory also needs a special version of quantum immortality which resets the “memories” of a dying being to zero, resulting in something like reincarnation, or an infinitely repeating universe in the style of Nietzsche's eternal recurrence.

20) Identity in a multilevel simulation. As we probably live in a simulation, there is a chance that it is a multiplayer game in which one gamer has several avatars and can constantly have experiences through all of them. It is like one eye looking through several people.

21) Splitting identity. This is an idea that future identity could split into several (or infinitely many) streams. If we live in a quantum multiverse we split every second without any (perceived) problems. We are also adapted to have several future copies if we think about “me-tomorrow” and “me-the-day-after-tomorrow”.

 

This list shows only the groups of identity definitions; many smaller ideas are included in the map.

The only rational choice I see is a conservative approach: acknowledging that we don’t know the nature of identity, and trying to save as much as possible in each situation in order to preserve identity.

The pdf: http://immortality-roadmap.com/identityeng8.pdf

 

 

 

 

Avoiding collapse: Grand challenges for science and society to solve by 2050

0 morganism 15 August 2016 05:47AM

"We maintain that humanity’s grand challenge is solving the intertwined problems of human population growth and overconsumption, climate change, pollution, ecosystem destruction, disease spillovers, and extinction, in order to avoid environmental tipping points that would make human life more difficult and would irrevocably damage planetary life support systems."

 

pdf onsite

https://elementascience.org/articles/94

Open Thread, Aug. 15. - Aug 21. 2016

4 Elo 15 August 2016 12:26AM

If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

A Review of Signal Data Science

10 The_Jaded_One 14 August 2016 03:32PM

I took part in the second Signal Data Science cohort earlier this year, and since I found out about Signal through a slatestarcodex post a few months back (it was also covered here on Less Wrong), I thought it would be good to return the favor and write a review of the program.

The tl;dr version:

Going to Signal was a really good decision. I had been doing teaching work and some web development consulting prior to the program to make ends meet, and now I have a job offer as a senior machine learning researcher1. The time I spent at Signal was definitely necessary for me to get this job offer, and another very attractive data science job offer that is my "second choice" job. I haven't paid anything to Signal, but I will have to pay them a fraction of my salary for the next year, capped at 10% with a maximum payment of $25k.

The longer version:

Obviously a ~12 week curriculum is not going to be a magic pill that turns a nontechnical, averagely intelligent person into a super-genius with job offers from Google and Facebook. In order to benefit from Signal, you should already be somewhat above average in terms of intelligence and intellectual curiosity. If you have never programmed and/or never studied mathematics beyond high school2 , you will probably not benefit from Signal in my opinion. Also, if you don't already understand statistics and probability to a good degree, they will not have time to teach you. What they will do is teach you how to be really good with R, make you do some practical machine learning and learn some SQL, all of which are hugely important for passing data science job interviews. As a bonus, you may be lucky enough (as I was) to explore more advanced machine learning techniques with other program participants or alumni and build some experience for yourself as a machine learning hacker. 

As stated above, you don't pay anything up front, and cheap accommodation is available. If you are in a situation similar to mine, not paying up front is a huge bonus. The salary fraction is comparatively small, too, and it only lasts for one year. I almost feel like I am underpaying them. 

This critical comment by fluttershy almost put me off, and I'm glad it didn't. The program is not exactly "self-directed" - there is a daily schedule and a clear path to work through, though they are flexible about it. Admittedly there isn't a constant feed of staff time for your every whim - ideally there would be 10-20 Jonahs, one per student; there's no way to offer that kind of service at a reasonable price. Communication between staff and students seemed to be very good, and key aspects of the program were well organised. So don't let perfect be the enemy of good: what you're getting is an excellent focused training program to learn R and some basic machine learning, and that's what you need to progress to the next stage of your career.

Our TA for the cohort, Andrew Ho, worked tirelessly to make sure our needs were met, both academically and in terms of running the house. Jonah was extremely helpful when you needed to debug something or clarify a misunderstanding. His lectures on selected topics were excellent. Robert's Saturday sessions on interview technique were good, though I felt that over time they became less valuable as some people got more out of interview practice than others. 

I am still in touch with some people I met on my cohort even though I had to leave the country; I consider them pals and we keep in touch about how our job searches are going. People have offered to recommend me to companies as a result of Signal. As a networking push, going to Signal is certainly a good move.

Highly recommended for smart people who need a helping hand to launch a technical career in data science.

 


 

1: I haven't signed the contract yet as my new boss is on holiday, but I fully intend to follow up when that process completes (or not). Watch this space. 

2: or equivalent - if you can do mathematics such as matrix algebra, know what the normal distribution is, understand basic probability theory such as how to calculate the expected value of a dice roll, etc, you are probably fine. 

Help with Bayesian priors

4 WikiLogicOrg 14 August 2016 10:24AM

I posted before about an open source decision making web site I am working on called WikiLogic. The site has a 2 minute explanatory animation if you are interested. I won't repeat myself, but the tl;dr is that it will follow the Wikipedia model of allowing everyone to collaborate on a giant connected database of arguments where previously established claims can be used as supporting evidence for new claims.

The raw deduction element of it works fine, and would be great in a perfect world where such a thing as absolute truth existed; in reality, however, we normally have to deal with claims that are merely the most probable. My program allows opposing claims to be connected and evidence to be gathered for each. The evidence produces a probability of each claim being correct, and whichever is highest gets marked as the best answer. Principles such as Occam's Razor are applied automatically, since a long chain of claims used as evidence will be less likely overall: each claim has its own likelihood, which dilutes the strength of the chain.

However, my only qualification in this area is my passion and I am hitting a wall with some basic questions. I am not sure if this is the correct place to get help with these. If not, please direct me somewhere else and I will remove the post.

 

The arbitrarily chosen example claim I am working with is whether “Alexander the Great existed”. This has two useful properties: 1) an expected outcome (that he existed - although perhaps my problem is that this is not the case!), and 2) it relies heavily on probability, as there is little solid evidence.

One popular claim is that coins were minted with his face on them. I want to use Bayes' theorem to find how likely having one's face on a coin is for someone who existed. As I understand it, there are 4 combinations:

  1. Existed; Had a coin minted
  2. Existed; Did not have a coin minted
  3. Did not exist; Had a coin minted
  4. Did not exist; Did not have a coin minted

 

The first issue is that there are infinitely many people who never existed and did not have a coin made. If I narrow it to historical figures who turned out not to exist and did not have a coin made, it becomes possible, but it also becomes subjective whether anyone actually thought they existed. For example, did people believe the Minotaur existed?

Perhaps I should choose another filter instead of "historical figure", like "humans that existed". But picking and choosing the category is again quite subjective. Someone might also argue that inequality between men and women back then was so great that the data should only look at men, as a woman’s chance of being portrayed on a coin was skewed in a way that isn’t applicable to men.

I hope I have successfully communicated the problem I am grappling with and what I want to use it for. If not, please ask for clarification. A friend in academia suggested that this touches on a problem with Bayesian priors that has not been settled. If that is the case, are there any suggested resources for a novice with limited free time to start exploring the issue? References to books or other online resources, or even somewhere else I should be posting this kind of question, would all be gratefully received. Not to mention a direct answer in the comments!
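To make the structure of the calculation explicit, here is a minimal Python sketch with completely made-up placeholder probabilities; choosing the reference class that would justify those numbers is exactly the part I am stuck on:

```python
# Bayes' theorem over the four combinations listed above.
# All three numbers below are placeholder assumptions, not estimates.
prior_existed = 0.5          # P(existed) before considering the coin evidence
p_coin_if_existed = 0.6      # P(coin with his face | existed)
p_coin_if_not = 0.05         # P(coin with his face | did not exist)

numerator = p_coin_if_existed * prior_existed
evidence = numerator + p_coin_if_not * (1 - prior_existed)
posterior_existed = numerator / evidence

print(f"P(existed | coin) = {posterior_existed:.3f}")  # ~0.923 with these placeholders
```

Different choices of reference class move all three of those numbers at once, which is why the posterior can swing so much - and that, as I understand it, is the unsettled-priors issue my friend was pointing at.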

New LW Meetup: Munich

0 FrankAdamek 12 August 2016 03:51PM

This summary was posted to LW Main on August 12th. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.

continue reading »

"Is Science Broken?" is underspecified

7 NancyLebovitz 12 August 2016 11:59AM

http://fivethirtyeight.com/features/science-isnt-broken/

This is an interesting article-- it's got an overview of what's currently seen as the problems with replicability and fraud, and some material I haven't seen before about handing the same question to a bunch of scientists, and looking at how they come up with their divergent answers.

However, while I think it's fair to say that science is really hard, the article gets into claiming that scientists aren't especially awful people (probably true), but doesn't address the hard question of "Given that there's a lot of inaccurate science, how much should we trust specific scientific claims?"

Inverse cryonics: one weird trick to persuade anyone to sign up for cryonics today!

2 The_Jaded_One 11 August 2016 05:29PM

OK, slight disclaimer, this is a bit of a joke article inspired by me watching a few recent videos and news reports about cryonics. Nevertheless, there is a serious side to it. 

Many people claim that it is irrational to sign up for cryonics, and getting into the nitty gritty with them about how likely it is to work seems to turn into a series of small skirmishes with no particular "win condition". Opponents will not say,

"OK, I will value my life at $X and if you can convince me that (cryonics success probability)*$X is greater than the $1/day fee, I will concede the argument".

Rather, they will retreat to a series of ever harder to falsify positions, usually ending up at a position which is so vague that it is basically pure mood affiliation and acts as a way to stop the conversation rather than as a true objection. I have seen it many times with friends. 

So, I propose that before you debate someone about cryonics, you should first try to sign them up for inverse cryonics. Inverse cryonics is a very simple, fully scientifically tested procedure that anyone can sign up for today, as long as they have a reasonably well-off benefactor to take the "other side" of the bet. Let me explain.

The inverse cryonics patient takes a simple revolver with 6 chambers, one of which is loaded, spins the cylinder, then shoots themselves once in the head1. If the inverse cryonaut is unlucky enough to land on the chamber containing the real bullet, they will blow their brains out and die instantly and permanently. However, if they are lucky, the benefactor must pay them $1 per day for the rest of their lives. 

Obviously you can vary the risk, rewards and timings of inverse cryonics. The death event could be postponed for 20 years, the risk could be cranked up or down, and the reward could be increased or decreased or paid out as a future discounted lump sum. The key is that signing up for inverse cryonics should be mathematically identical to not signing up for cryonics.

As a baseline, cryonics seems to cost ~$1/day for the rest of your life in order to avoid a ~1/10 chance of dying2. Most people3 would not play ~10-chamber Russian roulette for a $1/day stipend, even with delayed death or an instant ~$50k payout. 

In fact,

  • if you believe that cryonics costs ~$1/day for the rest of your life in order to avoid a ~1/10 chance of dying4  and
  • you are offered 11-chamber Russian roulette for that same ~$1/day as a stipend, or even an instant $50k payout
then
  • as a rational agent you shouldn't refuse both offers

Of course, I'm sure opponents of cryonics won't bite this particular bullet, but at the very least it may provide an extra intuition pump to move people away from objecting to cryonics because it's the "risky" option. 
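
To make the arithmetic behind that comparison explicit, here is a rough sketch; the value-of-life and remaining-years figures are arbitrary assumptions chosen only to show the structure, not numbers from this post:

```python
# A rough sketch of the bookkeeping behind the symmetry claim. The value-of-life
# and remaining-years figures are arbitrary assumptions, chosen only for illustration.
value_of_remaining_life = 5_000_000        # assumed, in dollars
lifetime_fee = 1.0 * 365 * 40              # ~$1/day over an assumed 40 remaining years

# Refusing cryonics is only consistent if the fee exceeds the expected value of the
# ~1/10 chance of survival it buys...
refuse_cryonics_ok = lifetime_fee > 0.10 * value_of_remaining_life
# ...but by the same arithmetic, taking an ~1/11 roulette risk for the same money
# should then also look acceptable.
accept_roulette_ok = lifetime_fee > (1 / 11) * value_of_remaining_life

print(refuse_cryonics_ok, accept_roulette_ok)   # with these numbers: False, False
```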

Comments and criticisms welcome. 

 

 

 

 


1. Depending on the specific deal, more than six chambers could be used, or several identical guns could be used where only one chamber of one gun contains a real bullet, allowing one to achieve a reasonable range of probabilities for "losing" at inverse cryonics, from 1 in 6 to perhaps 1 in 60 with ten guns. 

2. And pushing the probability of cryonics working down much further seems very hard to defend scientifically, not that people haven't tried. It becomes especially hard once you condition on the cryonics organizations sticking around for ~40 years, and on society persisting without major disruptions, so that a young potential cryonaut who signs up today actually pays their life insurance fees every day until they die. 

3. Most intelligent, sane, relatively well-off people in the developed world, i.e. the kind of people who reject cryonics. 

4. And you believe that the life you miss out on in the future will be as good, or better than, the life you are about to live from today until your natural death at a fixed age of, say, 75. 

 

 

Non-Fiction Book Reviews

9 SquirrelInHell 11 August 2016 05:05AM

Time start 13:35:06

For another exercise in speed writing, I wanted to share a few book reviews.

These are fairly well known, however there is a chance you haven't read all of them - in which case, this might be helpful.

 

Good and Real - Gary Drescher ★★★★★

This is one of my favourite books ever. It goes over a lot of philosophy while showing a lot of clear thinking and meta-thinking; it would be the number one replacement for Eliezer's meta-philosophy, had that not existed. The writing style and language are somewhat obscure, but the book is too brilliant to be spoiled by that. The biggest takeaway is the analysis of the ethics of non-causal consequences of our choices, which has actually changed how I act in my life, and I have not seen any similar argument elsewhere that would do the same. This book changed my intuitions so much that I now pay $100 in counterfactual mugging without a second thought.

 

59 Seconds - Richard Wiseman ★★★

A collection of various tips and tricks, directly based on studies. The strength of the book is that it gives easy but detailed descriptions of lots of studies, which makes it very fun to read. It can be read just to check out various psychology results in an entertaining format. The quality of the advice is disputable, and it is mostly the kind of advice that only applies to small things and does not change much of what you do even if you manage to use it. But I still liked this book, and it managed to avoid saying anything very stupid while saying a lot of things. That counts for something.

 

What You Can Change and What You Can't - Martin Seligman ★★★

It is heartwarming to see that the author puts his best effort toward figuring out which psychological treatments work and which don't, as well as building more general models of how people work that can predict which treatments have a chance in the first place. Not all of the content would survive updating on newer results (the book is quite old). However, if you are starting out, this book will serve excellently as your prior, which you can update after checking out the new results. In some cases it is amazing that the author was right 20 years ago and mainstream psychology has STILL not caught up (like the whole bullshit "go back to your childhood to fix your problems" approach, which is in wide use today and not bothered at all by such things as "checking facts").

 

Thinking, Fast and Slow - Daniel Kahneman ★★★★★

A classic, and I want to mention it just in case. It is too valuable not to read. Period. It turns out some of the studies the author used for his claims have later been found not to replicate. However, the details of those results are not (at least for me) the selling point of this book. The biggest thing is the author's mental toolbox for self-analysis and analysis of biases, as well as the concepts he created to describe the mechanisms of intuitive judgement. Learn to think like the author, and you are 10 years ahead in your study of rationality.

 

Crucial Conversations - Al Switzler, Joseph Grenny, Kerry Patterson, Ron McMillan ★★★★

I almost dropped this book. When I saw the style, it reminded me so much of the crappy self-help books without actual content. But fortunately I read on a little more, and it turns out that even though the style stays the same throughout and there is little content for the amount of text you read, it is still an excellent book. How is that possible? Simple: it only tells you a few things, but the things it tells you are actually important, they work, and they are amazing when you put them into practice. On the concept and analysis side there is precious little, but who cares, as long as there are some things that are "keepers". The authors spend most of the book hammering the same point over and over, which is "conversation safety". And it is still a good book: if you get this one simple point, then you have learned more than you might from reading 10 other books.

 

How to Fail at Almost Everything and Still Win Big - Scott Adams ★★★

I don't agree with much of the stuff in this book, but that's not the point here. The author says what he thinks, and he himself encourages you to pass it through your own filters. For around one third of the book, I thought it was obviously true; for another third, I had strong evidence that the author made a mistake or got confused about something; and the remaining third gave me new ideas, or points of view that I could use to produce more ideas for my own use. This felt like having a conversation with any intelligent person you might know who has different ideas from you. It was a healthy ratio of agreement and disagreement, the kind that leads to progress for both people. Except of course in this case the author did not benefit, but I did.

 

Time end: 14:01:54

Total time to write this post: 26 minutes 48 seconds

Average writing speed: 31.2 words/minute, 169 characters/minute

The same data calculated for my previous speed-writing post: 30.1 words/minute, 167 characters/minute

Open Thread, Aug. 8 - Aug 14. 2016

3 Elo 07 August 2016 11:07PM

If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

Advice to new Doctors starting practice

1 anandjeyahar 07 August 2016 08:27AM

Hi all,

Please read the Disclaimers at the end of the post first, if you're easily offended.

 

Generalists(general medicine):

  1. Get unbeatable at 20 Questions (rationality link). It'll help you make your initial diagnoses (the ones based on questions about symptoms) faster and more accurate.
  2. Understand probability, Bayes' theorem, and how to apply them.** This will help you interpret the test results you ordered based on the 20 questions. (A worked sketch follows this list.)
  3. Understand the base rate fallacy, and how to avoid being overconfident.
  4. Understand the upsides and downsides of the drugs you prescribe. Know the probabilities of fatal and adverse side-effects and update them with evidence (Bayes' theorem, mentioned above) as you try out different brands and combinations.
  5. Know the costs and benefits of any treatment and help the patient make a good decision based on the cost-benefit analysis of treatment combined with the probabilities of the outcomes.
  6. Ask for and keep a history of the patient's medical records and allergies, going back to their grandparents.*
  7. Be willing and able to judge when a patient is better off with a specialist. Try to keep in touch with Doctors nearby, and hopefully all types of specialists.
  8. Explain the treatment options and their pros and cons in easy language to the patients. It'll reduce misunderstandings and, eventually, dissatisfaction with the treatment.
  9. Resist the urge to treat patients as NPCs. Involve them in the treatment process.
  10. Meditate.
  11. Find a hobby that you can keep improving on till the end of life.
  12. Be aware of the conflict of interest between the patient and the pharmaceutical companies.
  13. Have enough research skills to form opinions on base rates/probabilities for different diseases and treatment methods as needed.
  14. If you're in a big hospital setup, make sure you have the best hospital administration.
  15. Medical expertise is only relevant once you see the patient. Your ability to judge the evidence requires getting access to it; this means you need to be able to correctly send requests, get the data back, and keep all of this attached to the correct patient. Scheduling, filing and communication: lacking these, medical expertise is meaningless. 
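
As a concrete illustration of points 2 and 3 above (all numbers hypothetical), this sketch shows why a positive result on a fairly good test can still leave the diagnosis unlikely when the base rate is low:

```python
# A minimal worked example of interpreting a test result with Bayes' theorem.
# All numbers are hypothetical, chosen only to show how base rates dominate.
prevalence = 0.01      # hypothetical base rate of the disease in your patient population
sensitivity = 0.95     # hypothetical P(test positive | disease)
false_positive = 0.05  # hypothetical P(test positive | no disease)

p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive
print(round(p_disease_given_positive, 3))  # 0.161: most positives are false positives here
```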

Specialists:

Basically the same skill sets as above. One difference is in the skill level and you should customize that as needed.

  1. For ex: you would need to be able to explain the treatment options and the probabilistic nature of the outcomes to your patients.
  2. As for research, keep track of progress in your area in treatment methods and their different outcomes on the "quality of life" of patients after treatment.
  3. Better applied Bayesian skills, in the sense of figuring out the independent variables and the probabilities with which they affect the outcome.

 

Some controversial ideas(Better use your common-sense before trying out):

  1. Experiment a little with your bio-chemistry and see how it affects your thought processes. To be safe, stick to biologically produced substances. For ex: injecting yourself with a small adrenaline dose and monitoring the bodily response can help keep your thinking clear in emergency situations.
  2. Know your own biology better. For ex: male vs female differences mean the adrenaline response is different and peaks later in females. If you think that's wrong, please go back and check your course work. Also watch this 2 hour video and come back with objections after reading the studies he quotes.
  3. Keep regularly checking your blood markers (for ex: hormone levels), at whatever frequency your practice and nature of work demands, so that you can start regulating yourself for optimal decision-making.
  4. If you're a woman, you'll customize your practice of some of the skill sets above differently. For ex: mastery over emotions might need more practice, while empathizing/connecting with the patient might be easier.

Disclaimers:

  1. Most of the above is based on my experiences (either as a patient myself or as a concerned relative) with Indian Doctors. Some of it may be trivial to others, but most of it is skills a doc will need that are ignored in school.
  2. I've split it in two (specialists and generalists), but there's a fair amount of overlap.
  3. These are fairly high standards, but worth shooting for, and I've kept the focus on smart rather than hard work.
  4. I've stayed away from a few topics, like bedside manners/social skills and specific medical treatments and conditions (obviously, I'm not a Doctor after all), and a few others; you can add/delete (also specify/pick levels) as you see fit.
  5. Pick the skill levels as demanded by your client population and adjust.
  6. I'm assuming generalists don't have to deal with emergency cases; where that's not likely to hold, pick the common emergency areas and follow the specialist advice.
  7. I wrote this based on my experiences and with humans in mind, but veterinary Doctors may find some of it useful too.

* -- I understand this is difficult in Indian circumstances, but I've seen it being done manually (simply leaves of prescriptions organized alphabetically, link to dr.rathinavel), so it's possible and worth the effort unless you practice in an area with a highly migratory population (for example, rural vs urban areas).

**-- If you're trying to compete on availability for consultation, you'll need to be able to do this after being woken in the middle of the night.

 

I'm hoping to convert this into a rationalist-skills-for-Doctors Wiki page, so please provide feedback, especially if you're a practicing Doctor. If you don't want to post publicly, email me (in profile) or comment on wordpress.

Now is the time to eliminate mosquitoes

20 James_Miller 06 August 2016 07:10PM

“In 2015, there were roughly 214 million malaria cases and an estimated 438 000 malaria deaths.”  While we don’t know how many humans malaria has killed, an estimate of half of everyone who has ever died isn’t absurd.  Because few people in rich countries get malaria, pharmaceutical companies put relatively few resources into combating it.   

 

The best way to eliminate malaria is probably to use gene drives to completely eradicate the species of mosquitoes that bite humans, but until recently rich countries haven't been motivated to attempt such xenocide.  The Zika virus, which is in mosquitoes in the United States, provides effective altruists with an opportunity to advocate for exterminating all species of mosquitoes that spread disease to humans, because the horrifying and disgusting pictures of babies with Zika might make the American public receptive to our arguments.  A leading short-term goal of effective altruists, I propose, should be advocating for mosquito eradication in the short window before rich people get acclimated to pictures of Zika babies.   

 

Personally, I have (unsuccessfully) pitched articles on mosquito eradication to two magazines and (with a bit more success) emailed someone who knows someone who knows someone in the Trump campaign to attempt to get the candidate to come out in favor of mosquito eradication.  What have you done?   Given the enormous harm mosquitoes inflict on mankind, doing just a little (such as writing a blog post) could have a high expected payoff.

 

Darknet Mining for Proactive Cybersecurity Threat Intelligence

3 morganism 06 August 2016 01:19AM

They are using machine learning to comb the darknets, capturing about 300 threats a week.

Accuracy is about 90% for recognizing hacking applications and backdoors offered for sale, and about 80% for identifying vulnerability discussions on hacker forums.

"These threat warnings include information on newly developed malware and exploits that have not yet been deployed in a cyber-attack"

https://arxiv.org/abs/1607.08583

Weekly LW Meetups

0 FrankAdamek 05 August 2016 03:42PM

This summary was posted to LW Main on August 5th. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.


Fairness in machine learning decisions

2 Stuart_Armstrong 05 August 2016 09:56AM

There's been some recent work on ensuring fairness in automated decision making, especially around sensitive areas such as racial groups. The paper "Censoring Representations with an Adversary" looks at one way of doing this.

It looks at a binary classification task where X ⊂ R^n and Y = {0, 1} is the (output) label set. There is also S = {0, 1}, the protected variable label set. The definition of fairness is that, if η : X → Y is your classifier, then η(X) should be independent of S. Specifically:

  • P(η(X)=1|S=1) = P(η(X)=1|S=0)

There is a measure of discrimination: the extent to which the classifier violates that fairness assumption. The paper then suggests optimising a tradeoff between discrimination and classification accuracy.
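
As a concrete illustration (my own sketch, not code from the paper), the discrimination measure can be estimated from a classifier's predictions and the protected labels like this:

```python
import numpy as np

def discrimination(y_pred, s):
    """Absolute gap in positive-prediction rates between the two protected groups."""
    y_pred, s = np.asarray(y_pred), np.asarray(s)
    return abs(y_pred[s == 1].mean() - y_pred[s == 0].mean())

# Toy example: predictions eta(X) and protected labels S for six individuals.
print(discrimination([1, 0, 1, 1, 0, 0], [1, 1, 1, 0, 0, 0]))  # |2/3 - 1/3| = 0.333...
```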

But this is problematic, because it risks throwing away highly relevant information. Consider redlining, the practice of denying services to residents of certain areas based on the racial or ethnic makeups of those areas. This is the kind of practice we want to avoid. However, generally the residents of these areas will be poorer than the average population. So if Y is approval for mortgages or certain financial services, a fair algorithm would essentially be required to reach a decision that ignores this income gap.

And it doesn't seem the tradeoff with accuracy is a good way of compensating for this. Instead, a better idea would be to specifically allow certain variables to be considered. For example, let T be another variable (say, income) that we want to allow. Then fairness would be defined as:

  • ∀t, P(η(X)=1|S=1, T=t) = P(η(X)=1|S=0, T=t)

What this means is that T can distinguish between S=0 and S=1, but, once we know the value of T, we can't deduce anything further about S from η(X). For instance, once the bank knows your income, its decision should reveal nothing further about the protected variable.
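
A sketch of how the conditional version could be measured, again my own illustration with a discretised T:

```python
import numpy as np

def conditional_discrimination(y_pred, s, t):
    """Largest gap, over values of T, in positive-prediction rates between S=1 and S=0."""
    y_pred, s, t = map(np.asarray, (y_pred, s, t))
    gaps = []
    for value in np.unique(t):
        group1 = y_pred[(t == value) & (s == 1)]
        group0 = y_pred[(t == value) & (s == 0)]
        if len(group1) and len(group0):          # skip T-values where one group is absent
            gaps.append(abs(group1.mean() - group0.mean()))
    return max(gaps) if gaps else 0.0

# Toy example with an income bracket T taking values 0 (low) and 1 (high).
y_pred = [0, 0, 1, 1, 0, 1, 1, 1]
s      = [1, 0, 1, 0, 1, 0, 1, 0]
t      = [0, 0, 0, 0, 1, 1, 1, 1]
print(conditional_discrimination(y_pred, s, t))  # 0.5, driven entirely by the T=1 bracket
```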

Of course, with enough T variables, S can be determined with precision. So each T variable should be fully justified, and in general, it must not be easy to establish the value of S via T.

New Pascal's Mugging idea for potential solution

2 kokotajlod 04 August 2016 08:38PM

I'll keep this quick:

In general, the problem presented by the Mugging is this: as we examine the utility of a given act in each possible world we could be in, in order from most probable to least probable, the utilities can grow much faster than the probabilities shrink. Thus it seems that the standard maxim "Maximize expected utility" is impossible to carry out, since there is no such maximum. When we go down the list of hypotheses, multiplying the utility of the act on each hypothesis by the probability of that hypothesis, the result does not converge to anything. 

Here's an idea that may fix this:

For every possible world W of complexity N, there's another possible world of complexity N+c that's just like W, except that it has two parallel, identical universes instead of just one. (If it matters, suppose that they are connected by an extra dimension.) (If this isn't obvious, say so and I can explain.)

Moreover, there's another possible world of complexity N+c+1 that's just like W except that it has four such parallel identical universes.

And a world of complexity N+c+X that has R parallel identical universes, where R is the largest number that can be specified in X bits of information. 

So, take any given extreme mugger hypothesis like "I'm a matrix lord who will kill 3^^^^3 people if you don't give me $5." Uncontroversially, the probability of this hypothesis will be something much smaller than the probability of the default hypothesis. Let's be conservative and say the ratio is 1 in a billion. 

(Here's the part I'm not so confident in)

Translating that into hypotheses with complexity values, that means that the mugger hypothesis has about 30 more bits of information in it than the default hypothesis. 
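
The conversion from a probability ratio to bits is just a logarithm, as a quick sanity check shows:

```python
import math

# A one-in-a-billion probability penalty corresponds to roughly this many extra bits:
print(math.log2(1e9))  # ~29.9, i.e. the "about 30 more bits" mentioned above
```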

So, assuming c is small (and actually I think this assumption can be done away with), there's another hypothesis, just as likely as the Mugger hypothesis: that you are in a duplicate universe exactly like the universe in the default hypothesis, except with R duplicates, where R is the largest number we can specify in 30 bits.

That number is very large indeed. (See the Busy Beaver function.) My guess is that it's going to be way way way larger than 3^^^^3. (It takes less than 30 bits to specify 3^^^^3, no?)

So this isn't exactly a formal solution yet, but it seems like it might be on to something. Perhaps our expected utility converges after all.

Thoughts?

(I'm very confused about all this which is why I'm posting it in the first place.)

 

Superintelligence and physical law

8 AnthonyC 04 August 2016 06:49PM

It's been a few years since I read http://lesswrong.com/lw/qj/einsteins_speed/ and the rest of the quantum physics sequence, but I recently learned about the company Nutonian, http://www.nutonian.com/. Basically it's a narrow AI system that looks at unstructured data and tries out billions of models to fit it, favoring those that use simpler math. They apply it to all sorts of fields, but that includes physics. It can't find Newton's laws from three frames of a falling apple, but it did find the Hamiltonian of a double pendulum given its motion data after a few hours of processing: http://phys.org/news/2009-12-eureqa-robot-scientist-video.html
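
The core idea behind this kind of system is easy to sketch, even though the production algorithm is certainly far more sophisticated. The toy Python below is my own illustration rather than Nutonian's code: it randomly generates small expression trees, scores them by fit error plus a complexity penalty, and keeps the best-scoring formula; all constants and the search budget are arbitrary.

```python
import random

# Toy data: noiseless samples of y = x**2 + 3*x.
xs = [i / 10.0 for i in range(-20, 21)]
ys = [x ** 2 + 3 * x for x in xs]

OPS = ['+', '-', '*']

def random_expr(depth=0):
    """Build a small random expression tree over x and small integer constants."""
    if depth > 2 or random.random() < 0.3:
        return 'x' if random.random() < 0.6 else random.randint(-3, 3)
    return (random.choice(OPS), random_expr(depth + 1), random_expr(depth + 1))

def evaluate(expr, x):
    if expr == 'x':
        return x
    if isinstance(expr, int):
        return expr
    op, a, b = expr
    a, b = evaluate(a, x), evaluate(b, x)
    return a + b if op == '+' else a - b if op == '-' else a * b

def size(expr):
    """Node count, used as a crude measure of formula complexity."""
    if expr == 'x' or isinstance(expr, int):
        return 1
    return 1 + size(expr[1]) + size(expr[2])

def score(expr):
    """Mean squared error plus a small penalty per node, so simpler formulas win ties."""
    mse = sum((evaluate(expr, x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
    return mse + 0.01 * size(expr)

best = min((random_expr() for _ in range(50000)), key=score)
print(best, score(best))
```

With enough random candidates the search tends to stumble on low-error, low-complexity formulas; real systems replace the blind sampling with evolutionary search and much richer operator sets.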

Motivated Thinking

3 Bound_up 03 August 2016 11:27PM

I'm playing around with an article on Motivated Cognition for general consumption

 

I think it's one of the most important things to teach someone about rationality (any other suggestions? Confirmation bias, placebo, pareidolia, and the odds of coincidences come to mind...)

 

So, I've taken the five kinds of motivated cognition I know of 

(Motivated skepticism)

 

(Motivated stopping)

 

(Motivated neutrality)

 

(Motivated credulity)

 

(Motivated continuation)

 

added a counterpart to "neutrality," and then renamed neutrality.

 

The end result being six kinds of motivated cognition: three pairs of two kinds each, which are opposites of each other. Also, each pair has one kind that begins with an S and one that begins with a C, which is good for mnemonic purposes.

 

So, I've got

Stopping and Continuation - Controls WHICH arguments you put in front of yourself (Do you continue because you haven't found what supports you yet, or do you stop because you have?)

Self-deprecation and Conceit - these control WHETHER you judge an argument in front of you (Do you refuse to judge ("Who am I to judge?") clear arguments that oppose your side or do you judge arguments you have no capacity to understand (the probability of abiogenesis, for example) because it lets you support your side?)

Skepticism and Credulity - Controls HOW you judge arguments (Do you demand stronger evidence for ideas you don't like, and weaker evidence for ideas you do? Do you scrutinize ideas you don't like more than ideas you do? Do you ask whether the evidence forces you to accept an idea, or whether it merely allows you to accept it?)

 

I'm thinking of introducing them in that order, too, with the "Which/Whether/How you judge" abstraction.

 

Anybody see better abstractions, better explanations, better mnemonic techniques? Any advice of any kind on how to teach this effectively to people? Other fundamentals to rationality? (Maybe the beliefs as probabilities idea?)

 

[link] MIRI's 2015 in review

9 Kaj_Sotala 03 August 2016 12:03PM

https://intelligence.org/2016/07/29/2015-in-review/

The introduction:

As Luke had done in years past (see 2013 in review and 2014 in review), I (Malo) wanted to take some time to review our activities from last year. In the coming weeks Nate will provide a big-picture strategy update. Here, I’ll take a look back at 2015, focusing on our research progress, academic and general outreach, fundraising, and other activities.

After seeing signs in 2014 that interest in AI safety issues was on the rise, we made plans to grow our research team. Fueled by the response to Bostrom’s Superintelligence and the Future of Life Institute’s “Future of AI” conference, interest continued to grow in 2015. This suggested that we could afford to accelerate our plans, but it wasn’t clear how quickly.

In 2015 we did not release a mid-year strategic plan, as Luke did in 2014. Instead, we laid out various conditional strategies dependent on how much funding we raised during our 2015 Summer Fundraiser. The response was great; we had our most successful fundraiser to date. We hit our first two funding targets (and then some), and set out on an accelerated 2015/2016 growth plan.

As a result, 2015 was a big year for MIRI. After publishing our technical agenda at the start of the year, we made progress on many of the open problems it outlined, doubled the size of our core research team, strengthened our connections with industry groups and academics, and raised enough funds to maintain our growth trajectory. We’re very grateful to all our supporters, without whom this progress wouldn’t have been possible.

Irrationality Quotes August 2016

5 PhilGoetz 01 August 2016 07:12PM

Rationality quotes are self-explanatory.  Irrationality quotes often need some context and explication, so they would break the flow in Rationality Quotes.

Rationality Quotes August 2016

2 bbleeker 01 August 2016 09:32AM

Another month, another rationality quotes thread. The rules are:

  • Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
  • Post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself.
  • Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
  • No more than 5 quotes per person per monthly thread, please.

August 2016 Media Thread

1 ArisKatsaris 01 August 2016 07:00AM

This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.

Rules:

  • Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
  • If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
  • Please post only under one of the already created subthreads, and never directly under the parent media thread.
  • Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
  • Use the "Meta" thread if you want to discuss about the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules.

Open Thread, Aug. 1 - Aug 7. 2016

3 Elo 01 August 2016 12:12AM

If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

Clean work gets dirty with time

1 Romashka 31 July 2016 07:59PM
Edited for clarity (hopefully) with thanks to SquirrelInHell.

Lately, I find myself more and more interested in how the concept of "systematized winning" can be applied to large groups of people who have one thing in common - and that not even time, but a hobby or a general interest in a specific discipline. It doesn't seem (to me) to much concern people working on their own individual qualities - performers, martial artists, managers (those who would self-identify as belonging to these sets) - but I am basing this on "general impressions" and will be glad to be corrected. It does seem to be a norm for some other sets, like sailors, who keep correcting maps every voyage.

The field in which I have been for some years (botany) does have something similar to what sailors do, which lets us see how floras change over time, etc. However, different questions arise when novel sub-disciplines branch off the main trunk, and naturally, the people asking these new questions keep reaching back for some kind of pre-existing observations. And often they don't check how much weight can be assigned to these observations, which, I think, is a bad habit that won't lead to "winning".

It is not "industrial rationality" per se, but a distantly related thing, and I think we might have to recognize it somehow. Or at least, recognize that it requires different assumptions... No set victory, for example... Still, it probably matters to more living people than pure "industrial rationality" does, & ignoring it won't make it go away.


Should we enable public binding precommitments?

0 capybaralet 31 July 2016 07:47PM

The ability to make arbitrary public binding precommitments seems like a powerful tool for solving coordination problems.

We'd like to be able to commit to cooperating with anyone who will cooperate with us, as in the open-source prisoner's dilemma (although this simple case is still an open problem, AFAIK).  But we should be able to do this piece-meal.

It seems like we are moving in this direction, with things like Ethereum that enable smart contracts.  Technology should enable us to enforce more real-world precommitments, since we'll be able to more easily monitor and make public our private data.

Optimistically, I think this could allow us to solve coordination issues robustly enough to have a very low probability of any individual actor making an unsafe AI.  This would require a lot of people to make the right kind of precommitments.

I'm guessing there are a lot of potential downsides and ways it could go wrong, which y'all might want to point out.

[Effective Altruism] Promoting Effective Giving at Conferences via Speed Giving Games

3 Gleb_Tsipursky 30 July 2016 03:16PM

Conferences provide a high-impact opportunity to promote effective giving. This is the broad take-away from an experiment in promoting effective giving at two conferences in recent months: the Unitarian Universalist (UU) General Assembly and the Secular Student Alliance (SSA) National Convention. This was an experiment run by Intentional Insights (InIn), an EA meta-charity devoted to promoting effective giving and rational thinking to a broad audience, with financial sponsorship from The Life You Can Save.

 

The outcomes, as detailed below, suggest that conferences can offer cost-effective opportunities to communicate effective giving messages to important stakeholders. An especially promising way to do so is to use Speed Giving Games (SGG) as a low-threshold strategy, since recent findings show Giving Games are an excellent means of promoting effective giving. This encourages participants to self-organize full-length Giving Games (GG) when they return home.

 

This article aims both to describe our experiences at UU and SSA and to serve as a guide to others who want to adopt these approaches to promote effective giving via conferences. The article is thus divided into several parts:

  • Evaluating the demographic group you want to target;

  • Evaluating the potential impact and cost of the conference;

  • Steps to prepare for the conference;

  • Outcomes of the conference;

  • Assessment of the experiment and conclusions;


Picking the Right Conference: Consider Demographics

 

Before deciding on a conference, make sure you target the right demographic. We at InIn, in agreement with The Life You Can Save, picked the two conferences mentioned above for a couple of reasons.

 

First, the UU and SSA both unite people who we thought were well-suited for promoting effective giving. Members of these organizations already put a considerable value both on improving the world, and on using reason and evidence to inform their actions in doing so.

 

Our work at SSA is part of our broader effort, in collaboration with The Life You Can Save and the Local Effective Altruist Network, to promote effective giving to secular, humanist, and skeptic groups. We do so by holding GGs targeted to their needs, appearing on podcasts, writing articles in secular venues about effective giving, and collaborating with a number of national and international common-interest organizations. Besides the SSA, this includes the Foundation Beyond Belief, United Coalition of Reason, American Humanist Association, International Humanist Ethical Union, and others.

 

The UU religious denomination is a more experimental focus group. It builds upon the success of the above-mentioned project, and expands to promote effective giving to people who are still somewhat reason-oriented, even if reason is less central for them. Yet UU members are strongly committed to action to improve the world, and generally show more active efforts on the social justice and civic engagement front than members of the secular, humanist, and skeptic movement. Thus, we at InIn and The Life You Can Save decided to target them as well.

 

Second, picking the right demographic also means having at least some people who are familiar with the language, needs, desires, and passions of the niche group you are targeting, and have some connections within it. Knowing the interests and language of the demographics is really valuable for understanding how to frame the concept of effective giving to those demographics. Having people with pre-existing connections and networks within that demographic allows you to approach them as an insider, giving you instant credibility and much more leverage when introducing the audience to an unfamiliar concept.

 

For the SSA, we had it easy, due to our extensive connections in the secular/skeptic/humanist movement. The SSA Executive Director is on the Intentional Insights Advisory Board,  our members regularly appear on podcasts and write for venues within that movement, and many of our members attend local humanist/secular/skeptic groups.

 

We had fewer connections in UU, but the ones that we did have were sufficient. Our two co-founders and some of our members attend UU churches. Intentional Insights creates curriculum content for the UU movement, appears on relevant podcasts and writes for major venues. This proved to be more than enough familiarity from the perspective of knowing the language and interests.

 

Picking the Right Conference: Consider Impact and Costs

 

After choosing the right demographic, consider and balance the potential impact and effectiveness of each conference.

 

Number and influence of attendees:

 

Both the UU and the secular/skeptic/humanist movements hold a number of conferences. Fortunately, a single annual conference unites the whole UU movement, with over 3,500 UU leaders from around the world coming. Moreover, the people who come to the UU General Assembly constitute the most active members of the movement – Ministers, Religious Education Directors, church staff, lay leaders and prominent writers – in other words, those stakeholders most capable of spreading effective giving ideas into the UU community.

 

The SSA event had far fewer people, with just over 200 attendees. However, many movers and shakers from the secular/skeptic/humanist movement attend the conference. This makes it attractive from the perspective of spreading effective giving ideas in the movement.

 

Impact of your role at conference:

 

First, most conferences have tabling opportunities for exhibitors, and as an exhibitor, you can hold SGGs at your table. We did that both at the SSA and UU, and I doubt we would have gone to either without that opportunity, since we found it to be very effective at promoting effective giving.

 

Caption: Intentional Insights table at the Secular Student Alliance conference (courtesy of InIn)

 

Second, if you have an opportunity to be a speaker and can promote effective giving at your talk, this raises the impact you can make at a conference. That said, unless you can focus your talk on effective giving or at least give out relevant materials and sign-up sheets, simply mentioning effective giving may not be that impactful. It all depends on how you go about it, and whether the concept is relevant to your talk and memorable to the audience. I was a speaker at the SSA, worked effective giving into my talk without focusing on it, and distributed relevant materials about effective giving.

 

Third, consider whether you have specific networking opportunities at a conference that are  helpful for promoting effective giving. For instance, this might involve having small-group or one-on-one meetings with influencers where you can safely promote effective giving without seeming pushy. At both the SSA and UU, we had both pre-scheduled and spontaneous meetings with notable people, which allowed us to promote effective giving concepts.

 

Costs: One of the fundamental aspects of effective giving is cost-effectiveness, and it is important to apply this metric to marketing effective giving, as well.

 

For the experiment with promoting effective giving at conferences, we at InIn decided to collaborate with The Life You Can Save on the most low-cost opportunities. Thus, one of the reasons we chose the UU and SSA conventions is that they both happened in Columbus, where InIn is based. InIn provided the people who ran the table and did the networking, and The Life You Can Save covered fees for conference registration, tabling, and other miscellaneous fees.

 

The UUA conference registration is around $450 per participant, and $800 for a table. Fortunately, as InIn is a member of a UU organization through which we promote Giving Games and other InIn materials, we were able to use a table at a discount, for $200. Miscellaneous fees included parking and food, for around $20 per participant per day. We had 2 people at the conference each day, so for the 5-day conference, that was $200. We also had about $175 in marketing costs to design and print flyers. We registered only one person, as we got one free participant with a table, so the total cost came down to $1025.

 

The SSA conference registration fee is around $135 per participant, and $150 for a table. As a speaker, I got a free registration, and another free registration accompanied the table. Parking and food cost $140 for the 3-day conference, and marketing costs came out to $150, for a total of $440.

 

Prepare Well

 

To prepare for the conferences, we at InIn brainstormed about the appropriate ways to present effective giving at both conferences. We then prepared talking points relevant to each audience, and coordinated with all people who would table at both conferences to ensure they knew how to present effective giving to the two audiences well.

 

As an example, you can see the GGs packet adapted to the language and interests of the SSA here and UU here. The main modifications are in the “Activity Overview” section, and these changes represent the broad difference in the kind of language we used.

 

Besides the language, we put a lot of effort into designing attractive marketing materials for our table. We created a large sign, visible from a long distance, with “Free Money” in red. People are attracted both to the color red and to the phrase “Free Money,” and it is highly important to draw attention in the context of a busy conference.

 

Caption: SGG activity overview for both UU and SSA conferences (courtesy of InIn)

 

We hired a professional designer to compose an attractive layout for the SGG activity at our table. SGGs involve having people make a decision between two charities; each vote sends a dollar, sponsored by an outside party (usually The Life You Can Save), to one charity or the other. It was important to create a nice layout that people could engage with quickly and easily, again due to distractions in the conference setting. We chose GiveDirectly as the effective charity, and the Mid-Ohio Food Bank as a local and not-so-effective charity.

 

For those who participated in SGGs, we then aimed at getting them to sign up for the InIn newsletter and The Life You Can Save newsletter, and at engaging them in conversations about effective giving. We also printed out shorter versions of the UU and SSA Giving Games packets. These had brief descriptions of the full Giving Games, with links to the longer versions they could host back in their SSA student clubs or UU congregations.

 

Another thing we did is schedule meetings in advance with some influencers to discuss effective giving opportunities. We also made sure to schedule meetings spontaneously during the conference with notables who seemed interested in effective giving. For those who expressed an interest but did not have time to meet, we made sure to exchange contact information and follow up afterwards.

 

Finally, we applied to be speakers at both conferences. We succeeded with the SSA, but not with UU. Still, we decided to attend the UU conference, because the costs were low enough since we did not have to travel and The Life You Can Save judged the potential impact worthwhile.

 

Conference Outcomes

 

At the UU conference, we had around 75 people play the SGG, so around 2% of attendees. Of those, about 65% (just under 50 people) signed up for the newsletter. We had 50 packets with GG descriptions printed, and we ran out by the end of the conference. Additionally, about 70% of the people who played there voted for GiveDirectly.

 

We also had meetings with some notable parties interested in effective giving. Especially promising was a meeting with the Executive Director of the Unitarian Universalist Humanist Association (UUHA), who expressed a strong interest in bringing GGs to her constituents. There are hundreds of UU Humanist groups within congregations around the world. We are currently working on testing a GG at a local UU Humanist group, and we will then write up the results for the UUHA blog. We had some other promising meetings as well, but no one was as interested as the UUHA.

 

At the SSA conference, we had 15 people play the SGG, so around 7.5% of attendees. Of those, 80% signed up for the newsletter, so about 12 people. The same proportion, 80%, voted for GiveDirectly.

 

We gave away around 35 GG packets with descriptions, as some people did not want to play the SGG, but were interested in having their clubs host it. Distributing packets was especially helped by the fact that I was a speaker at the SSA, and promoted and handed out packets at my presentation.

 

The meetings with notable parties proved more promising at the SSA. We met with staff from two national secular organizations, the American Ethical Union and the Center for Inquiry, who expressed an interest in promoting GGs to their members. A number of influencers expressed enthusiasm over the concept of effective giving, and wanted to promote it broadly in the secular/skeptic/humanist movement.

 

Assessment and Conclusion

 

We would have been satisfied at both conferences to have at least half of the people who played the SGG vote for GiveDirectly and have half the people sign up. We ended up with 70% voting for GiveDirectly at UU and 80% at SSA, and 65% signing up for the newsletter at UU and 80% at the SSA. So, these conferences strongly exceeded our baseline expectations. We did not have specific expectations for giving away packets or meetings with notables. Yet looking back, we certainly did not expect the level of interest we got for conference participants holding Giving Games back home - we would have printed more packets for the UU had we thought they might run out.

 

The evidence from GGs shows they are a great method to promote effective giving. Getting influencers from target demographics engaged with GGs not only gets the activists to give more effectively, but also encourages the activists to hold GGs back at their groups.

 

After all, holding GGs is a win-win for secular/skeptic/humanist groups and UU congregations alike. They get to engage in an activity that embodies their values of using reason and evidence. At the same time, they get to improve the world and build a sense of community without spending a penny.

 

For those of us promoting effective giving, it presents these ideas to a new audience, and enables the audience to continue engaging if they wish. The newsletter sign-ups are especially indicative of people's interests. So are the numbers of people who took packets to host GGs back at their groups. We at InIn already heard from several people who are arranging Giving Games after being exposed to the adapted GG packets, including a UU church that is arranging to have a GG for all 500 members of the church. Based on these outcomes, we at InIn and The Life You Can Save decided it would even be worthwhile to invest in traveling to distant conferences, given the right conditions - having a table, a speaking role, potential influencers, etc.

 

So, consider promoting effective giving at conferences to audiences not directly related to existing effective altruism communities. Hopefully, the steps I outlined above will help you decide on the best opportunities to do so. I would be glad to chat with you about specifics and share more details; email me at gleb@intentionalinsights.org.

 

Acknowledgments: For feedback on earlier stages of this draft, my gratitude to Jon Behar, Laura Gamse, Ryan Carey, Malcolm Ocean, Matthijs Maas, Yaacov Tarko, Dony Christie, Jake Krycia, Remmelt Ellen, Alexander Semenychev, Ian Pritchford, Ed Chen, Lune Nekesa, Jo Duyvestyn, and others who wished to remain anonymous.

The map of p-zombies

6 turchin 30 July 2016 09:12AM
No real p-zombies exist in any probable way, but a lot of ideas about them have been suggested. This map is a map of those ideas. It may be fun, or it may be useful.

The most useful application of p-zombie research is to determine whether we could lose something important during uploading.

We have to solve the problem of consciousness before we are uploaded. It would be the most stupid end of the world: everybody is alive and happy, but everybody is a p-zombie. 

Most ideas here are from the Stanford Encyclopedia of Philosophy, the LessWrong wiki, RationalWiki, a recent post of EY's, and from the works of Chalmers and Dennett. Some ideas are mine. 

The pdf is here.

