A problem in anthropics with implications for the soundness of the simulation argument.
What are your intuitions about this? It has direct implications for whether the Simulation Argument is sound.
Imagine two rooms, A and B. Between times t1 and t2, 100 trillion people sojourn in room A while 100 billion sojourn in room B. At any given moment, though, exactly 1 person occupies room A while 1,000 people occupy room B. At t2, you find yourself in a room, but you don't know which one. If you have to place a bet on which room it is (at t2), what do you say? Do you consider the time-slice or the history of room occupants? How do you place your bet?
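The two ways of counting can be made concrete with a few lines of arithmetic. This is only a sketch of the two candidate reference classes from the setup above (the numbers are taken directly from it), not an argument for either:

```python
# History view: sample uniformly from everyone who ever sojourned in a room
# between t1 and t2.
total_A = 100e12   # 100 trillion total occupants of room A
total_B = 100e9    # 100 billion total occupants of room B
p_A_history = total_A / (total_A + total_B)

# Time-slice view: sample uniformly from the people occupying the rooms
# at the moment t2 itself.
now_A = 1          # 1 person in room A at any given moment
now_B = 1000       # 1,000 people in room B at any given moment
p_A_slice = now_A / (now_A + now_B)

print(f"P(A), history reference class:    {p_A_history:.6f}")  # ~0.999001
print(f"P(A), time-slice reference class: {p_A_slice:.6f}")    # ~0.000999
```

The two reference classes give nearly opposite bets, which is exactly what makes the question interesting.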
If you bet that you're in room B, then the Simulation Argument may be flawed: there could be a fourth disjunct that Bostrom misses, namely that we become a posthuman civilization that runs a huge number of simulations, yet we have no reason to believe that we are simulations ourselves.
Thoughts?
Cryo with magnetics added
This is great: by using small interlocking magnetic fields, you can keep the water in a higher vibrational state, allowing "super-cooling" without crystallization and cell rupture.
Subzero 12-hour Nonfreezing Cryopreservation of Porcine Heart in a Variable Magnetic Field
"invented a special refrigerator, termed as the Cells Alive System (CAS; ABI Co. Ltd., Chiba, Japan). Through the application of a combination of multiple weak energy sources, this refrigerator generates a special variable magnetic field that causes water molecules to oscillate, thus inhibiting crystallization during ice formation (Figure 1). Because the entire material is frozen without the movement of water molecules, cells can be maintained intact and free of membranous damage. This refrigerator has the ability to achieve a nonfreezing state even below the solidifying point."
October 2016 Media Thread
This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.
Rules:
- Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
- If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
- Please post only under one of the already created subthreads, and never directly under the parent media thread.
- Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
- Use the "Meta" thread if you want to discuss the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules.
Open thread, Oct. 03 - Oct. 09, 2016
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "
Open thread, Oct. 17 - Oct. 23, 2016
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "
Open thread, Oct. 10 - Oct. 16, 2016
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "
Open thread, Oct. 24 - Oct. 30, 2016
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "
The map of agents which may create x-risks
Recently Phil Torres wrote an article raising a new topic in existential risk research: the question of who the possible agents in the creation of a global catastrophe could be. He identifies five main types of agents, and two main reasons why they might create a catastrophe (error and terror).
He discusses the following types of agents:
(1) Superintelligence.
(2) Idiosyncratic actors.
(3) Ecoterrorists.
(4) Religious terrorists.
(5) Rogue states.
Inspired by his work I decided to create a map of all possible agents as well as their possible reasons for creating x-risks. During this work some new ideas appeared.
I think that a significant addition to the list of agents should be superpowers, as they are known to have created most global risks in the 20th century; corporations, as they are now on the front line of AGI creation; and pseudo-rational agents, who could create a Doomsday weapon in the future to use for global blackmail (maybe with positive values), or who could risk civilization's fate for their own benefit (dangerous experiments).
The X-risks prevention community could also be an agent of risk: if it fails to prevent obvious risks, if it uses smaller catastrophes to prevent larger risks, or if it creates new dangerous ideas about possible risks which could inspire potential terrorists.
The more technology progresses, the more types of agents will have access to dangerous technologies, eventually including even teenagers (see "Why This 14-Year-Old Kid Built a Nuclear Reactor").
In this situation only the number of agents with risky tech will matter, not the exact motivations of each one. But if we are unable to control the tech, we could at least try to control the potential agents, or their average mood.
The map shows various types of agents, starting from non-agents and ending with types of agential behavior which could result in catastrophic consequences (error, terror, risk, etc.). It also shows the types of risks that are most probable for each type of agent. I think that my explanation in each case should be self-evident.
We could also show how x-risk agents will change with the pace of technological progress. In the beginning there are no agents; later there are superpowers, and then smaller and smaller agents, until there are millions of people with biotech labs at home. In the end there will be only one agent: a SuperAI.
So, lessening the number of agents, and increasing their "morality" and intelligence, seem to be the most plausible directions for lowering risks. Special organizations or social networks may be created to control the riskiest types of agents. Different agents probably need different types of control. Some ideas for this agent-specific control are listed in the map, but a real control system would have to be much more complex and specific.
The map shows many agents, some of which are real and exist now (but don't have dangerous capabilities), and some of which are only possible in a moral or a technical sense.
So there are 4 types of agents, and I show them in the map in different colours:
1) Existing and dangerous, i.e. already having the technology to destroy humanity: superpowers, arrogant scientists. - Red
2) Existing, and willing to end the world, but lacking the needed technologies (ISIS, VHEMT). - Yellow
3) Morally possible, but not existing. We could imagine logically consistent value systems which could result in human extinction, such as Doomsday blackmail. - Green
4) Agents which will pose a risk only after supertechnologies appear, like AI hackers or child biohackers. - Blue
Many agent types don't fit this classification, so I left them white in the map.
The pdf of the map is here: http://immortality-roadmap.com/agentrisk11.pdf
(The jpg of the map is below; because the sidebar was covering part of it, I placed it higher.)

Counterfactual do-what-I-mean
A putative new idea for AI control; index here.
The counterfactual approach to value learning could be used to possibly allow natural language goals for AIs.
The basic idea is that when the AI is given a natural language goal like "increase human happiness" or "implement CEV", it is not to figure out what these goals mean, but to follow what a pure learning algorithm would establish these goals as meaning.
This would be safer than a simple figure-out-the-utility-you're-currently-maximising approach. But it still has a few drawbacks. Firstly, the learning algorithm has to be effective itself (in particular, modifying human understanding of the words should be ruled out, and the learning process must avoid concluding that simpler interpretations are always better). And secondly, humans don't yet know what these words mean outside our usual comfort zone, so the "learning" task also involves the AI extrapolating beyond what we know.
Internal Race Conditions
Time start: 14:40:36
I
You might be familiar with the concept of a 'bug', as introduced by CFAR. By using the computer programming analogy, it frames any problem you might have in your life as something fixable; even more, as something to be fixed, something such that fixing it, or thinking about how to fix it, is the first thing that comes to mind when you see such a problem, or 'bug'.
Let's try another analogy in the same style, with something called 'race conditions' in programming. A race condition is a particular type of bug that is typically very hard to find and fix ('debug'). It occurs when two or more parts of the same program 'race' to access some data, resource, decision point, etc., in a way that is not controlled by any organised principle.
For example, imagine that you have a document open in an editor program. You make some changes and give the command to save the file. While this operation is in progress, you drag and drop the same file in a file manager, moving it to another hard drive. In this case, depending on timing, on the details of the programs, and on the operating system you are using, you might get different results. The old version of the file might be moved to the new location, while the new one is saved in the old location. Or the file might get saved first, and then moved. Or the saving operation will end in an error, or in a truncated or otherwise malformed file on the disk.
If you knew enough details about the situation, you could in fact work out exactly what would happen. But the margin of error in your own handling of the software is so big that you cannot do this in practice (e.g. you'd need to know the exact millisecond when you press each button). So in practice, the outcome is random, depending on how the events play out on a scale smaller than you can directly control (e.g. minute differences in timing, strength of reactions, etc.).
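The pattern can be reproduced in a few lines of code. Below is a minimal Python sketch of my own (the names are illustrative, not from any real program) of a classic "lost update" race: each thread reads a shared counter, pauses, and writes back a stale value.

```python
import threading
import time

counter = 0  # shared state, with no synchronisation at all

def unsafe_increment():
    """Read-modify-write with a window in which other threads can interleave."""
    global counter
    tmp = counter        # read the shared value
    time.sleep(0.01)     # the 'race window': other threads run here
    counter = tmp + 1    # write back a possibly stale value

threads = [threading.Thread(target=unsafe_increment) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # expected 10, but almost always far less (typically 1)
```

As in the editor example, the final result depends on timing details you do not directly control: the program is correct-looking, yet its outcome is effectively random.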
II
What is the analogy in humans? One of the places in which when you look hard, you'll see this pattern a lot is the relation of emotions and conscious decision making.
E.g., a classic failure mode is a "commitment to emotions", which goes like this:
- I promise to love you forever
- however if I commit to this, I will have doubts and less freedom, which will generate negative emotions
- so I'll attempt to fall in love faster than my doubts grow
- let's do this anyway, why don't we?
The problem here is a typical emotional "race condition": there is a lot of variability in the outcome, depending on how events play out. There could be a "butterfly effect", in which e.g. a single weekend trip together could determine the fate of the relationship, by creating a swing up or down, which would give one side of emotions a head start in the race.
III
Another typical example is making a decision about continuing a relationship:
- when I spend time with you, I like you more
- when I like you more, I want to continue our relationship
- when we have a relationship, I spend more time with you
As you can see, there is a loop in decision process. This cannot possibly end well.
A wild emotional rollercoaster is probably around the least bad outcome of this setup.
IV
So how do you fix race conditions?
By creating structure.
By following principles which compute the result explicitly, without unwanted chaotic behaviour.
By removing loops from decision graphs.
First and foremost, by recognizing that leaving a decision to a race condition is strictly worse than any decision process that we consciously design, even if that process is flipping a coin (at least you know the odds!).
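Continuing the programming analogy from part I: in software, "creating structure" often literally means adding a lock, so that the read-modify-write happens as one unit. A minimal sketch, using the same hypothetical counter as before:

```python
import threading
import time

counter = 0
lock = threading.Lock()  # the added 'structure'

def safe_increment():
    global counter
    with lock:           # only one thread at a time may run this block
        tmp = counter
        time.sleep(0.01)
        counter = tmp + 1

threads = [threading.Thread(target=safe_increment) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # always 10: the read-modify-write is now one atomic unit
```

The lock does not make the threads faster or smarter; it simply removes the uncontrolled interleaving, which is exactly the move proposed here for emotions and decisions.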
Example: deciding to continue the relationship.
Proposed solution (arrows represent influence):
(1) controlled, long-distance emotional evaluation -> (2) systemic decision -> (3) day-to-day emotions
The idea is to remove the loop by organising emotions into two groups: those that are directly influenced by the decision or its consequences (3), and more distant "evaluation" emotions (1). The opportunity to feel the emotions in (1) can be created by pre-deciding on a time to have some time alone and judge the situation from more of a distance, e.g. "after 6 months of this relationship I will go for a 2-week vacation to my aunt in France, and think about it in a clear-headed way, making sure I consider emotions about the general picture, not day-to-day things like physical affection etc.".
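The "remove loops from decision graphs" step can even be checked mechanically: encode the influences as a directed graph and look for cycles with a depth-first search. The two toy graphs below are my own encoding of the loop from part III and of the proposed structure, not anything from the original post:

```python
def has_cycle(graph):
    """Return True if the directed graph {node: [successors]} contains a cycle."""
    visiting, done = set(), set()

    def dfs(node):
        if node in done:
            return False
        if node in visiting:
            return True   # back to a node on the current path: a loop
        visiting.add(node)
        if any(dfs(succ) for succ in graph.get(node, [])):
            return True
        visiting.remove(node)
        done.add(node)
        return False

    return any(dfs(node) for node in graph)

# The decision loop from part III: each step feeds the next, and back again.
loop_graph = {
    "spend time together": ["like you more"],
    "like you more": ["continue relationship"],
    "continue relationship": ["spend time together"],
}

# The proposed structure: influence flows one way only, (1) -> (2) -> (3).
restructured = {
    "long-distance evaluation": ["systemic decision"],
    "systemic decision": ["day-to-day emotions"],
    "day-to-day emotions": [],
}

print(has_cycle(loop_graph))    # True: the outcome is left to a race
print(has_cycle(restructured))  # False: the result is computed explicitly
```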
V
There is much to write on this topic, so please excuse my brevity (esp. in the last part, giving some examples of systemic thinking about emotions) - there is easily enough content about this to fill a book (or two). But I hope I gave you some idea.
Time end: 15:15:42
Writing stats: 31 minutes, 23 wpm, 133 cpm
New LW Meetup: Zurich
This summary was posted to LW Main on October 21st. The following week's summary is here.
New meetups (or meetups with a hiatus of more than a year) are happening in:
Irregularly scheduled Less Wrong meetups are taking place in:
- Munich Meetup in October: 29 October 2016 04:00PM
- Stockholm: Mental contrasting: 21 October 2016 04:00PM
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
- Bay Area Winter Solstice 2016: 17 December 2016 07:00PM
- [Moscow] Games in Kocherga club: FallacyMania, Tower of Chaos, Scientific Discovery: 26 October 2016 07:40PM
- NY Solstice 2016 - The Story of Smallpox: 17 December 2016 06:00PM
- San Francisco Meetup: Stories: 24 October 2016 06:15PM
- Washington, D.C.: Technology of Communication: 23 October 2016 03:30PM
Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, St. Petersburg, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.
Weekly LW Meetups
This summary was posted to LW Main on September 30th. The following week's summary is here.
Irregularly scheduled Less Wrong meetups are taking place in:
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
- Bay Area Winter Solstice 2016: 17 December 2016 07:00PM
- Melbourne: A Bayesian Guide on How to Read a Scientific Paper: 08 October 2016 03:30PM
- Sydney Rationality Dojo - October 2016: 02 October 2016 04:00PM
Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.
Philosophical theory with an empirical prediction
I have a philosophical theory which implies some things empirically about quantum physics, and I was wondering if anyone knowledgeable on the subject could give me some insight.
It goes something like this:
As an anathema to reductionists: quarks (and by "quarks" I just mean whatever the fundamental particles of the universe are) are not governed by simple rules à la Conway's Game of Life; rather, all of metaphysics goes into their behavior.
The reductionist basically reduces metaphysics to the simple rules that govern quarks. Fundamentally there is no other identity or causality; everything else just emerges from that. Anything we want to call "real" that we deal with in ordinary experience does not have any metaphysical identity or causal efficacy of its own; it's just an illusion produced by tons of atoms bouncing around. If the universe is akin to Conway's Game of Life, then I don't think the things we see around us are actually what we think they are. They don't have any real identity on a metaphysical level; rather, they are just patterns of particles in motion, governed by mathematically simple rules.
But suppose there actually is metaphysical identity and causal power in the things around us. The place I can see for that is in the unknown rules governing quarks: these are not mathematically simple rules, but literally where all of metaphysics is contained. Quarks entangle together according to high-level concepts corresponding to the things we see around us, including a person's identity, and have not the mathematically simple causal powers of Conway's Game of Life, but the causal powers of the identity of the high-level agent.
The empirical question is this: do we observe the fundamental particles of the universe behaving according to mathematically simple rules, or do they seem to behave in complex/unpredictable ways depending on how they are entangled / what they are interacting with?
Adding an example to clarify:
The behavior of the quarks corresponds to the identity of the things we see around us. The things we see around us are constituted by quarks - but the question is, are these quarks behaving mindlessly as billiard balls, or is their behavior the result of complex rules corresponding to the identity of the thing they form?
In other words, suppose we're talking about a living ant, are the quarks which constitute that ant behaving according to simple mathematical rules like billiard balls, and the whole concept of there being an "ant" is just an illusion produced by these particles bouncing around, or are these quarks constituting the ant actually behaving "ant-like"?
Is the causal behavior of the ant determined by the billiard-ball interactions of quarks bouncing around, or does the causal behavior actually originate in the identity of the ant, with the quark interactions being decided according to its nature?
What I'm saying is that there metaphysically is such a thing as an ant: when quarks "get together as an ant", they behave differently; they behave ant-like. Given that much is unknown about exactly why quarks behave the way they do, why is this ruled out: that when they "get together as an ant", they behave ant-like?
Basically the idea is, when it comes to the interactions of the quarks constituting the ant with the quarks constituting the things the ant interacts with, the behavior of those interactions is determined not by simple, universal rules of quark behavior, but by the rules of quark behavior that are in effect "when the quarks are an ant".
To further clarify this example:
This is framed in general terms, because I don't actually know any quantum physics, but I'm talking about the fundamental physical particles ("quarks", for lack of a better term), and their behavior at the quantum level - behavior which we don't fully understand. So one could say in general terms, sometimes the quarks "swerve left" and other times they "swerve right", and we don't exactly know why they do that in any given case.
So the question is, suppose the behavior of quarks in general is not determined by simple, universal laws of quark behavior, e.g. "always swerve left 50% of the time", but rather, there are metaphysically real and physically meaningful "quark groups", like if a bunch of quarks are entangled together in a group constituting what we'd observe to be an ant, then quarks in that quark group behave differently. So for example, the quarks in that "ant quark group" might always swerve left when they interact with another quark group of a different kind.
Weekly LW Meetups
New meetups (or meetups with a hiatus of more than a year) are happening in:
Irregularly scheduled Less Wrong meetups are taking place in:
- Munich Meetup in October: 29 October 2016 04:00PM
- Stockholm: Bottlenecks to trading personal resources: 11 November 2016 05:15PM
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
- Bay Area Winter Solstice 2016: 17 December 2016 07:00PM
- Moscow: rational review, status quo bias, interpersonal closeness: 30 October 2016 02:00PM
- NY Solstice 2016 - The Story of Smallpox: 17 December 2016 06:00PM
- San Francisco Meetup: Board Games: 31 October 2016 06:15PM
- Washington, D.C.: Halloween Party: 30 October 2016 03:00PM
Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, St. Petersburg, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.
Your Truth Is Not My Truth
Can someone help me dissolve this, and give insight into how to proceed with someone who says this?
What are they saying, exactly? That the set of beliefs in their head that they use to make decisions is not the same set of beliefs that you use to make decisions?
Could I say something like "Yes, that's so, but how do you know that your truth matches what is in the real world? Is there some way to know whether your truth is only true for you, or actually true for everybody?"
I'm trying to get a feel for what they mean by "true" in this case, since it's obviously not "matching reality."
Weekly LW Meetups
This summary was posted to LW Main on October 14th. The following week's summary is here.
Irregularly scheduled Less Wrong meetups are taking place in:
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
- Bay Area Winter Solstice 2016: 17 December 2016 07:00PM
- San Francisco Meetup: Rationality Diary: 17 October 2016 06:15PM
- Washington, D.C.: Fun & Games: 16 October 2016 03:30PM
Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, St. Petersburg, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.
Weekly LW Meetups
This summary was posted to LW Main on October 7th. The following week's summary is here.
Irregularly scheduled Less Wrong meetups are taking place in:
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
- Baltimore Area / UMBC Weekly Meetup: 09 October 2016 08:00PM
- Bay Area Winter Solstice 2016: 17 December 2016 07:00PM
- Melbourne: A Bayesian Guide on How to Read a Scientific Paper: 08 October 2016 03:30PM
- Moscow: rational review, bias busters, Kolmogorov and Jayes probability: 09 October 2016 02:00PM
- Washington, D.C.: Games Discussion: 09 October 2016 03:30PM
Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.
Trying to find a short story
It's a story about a boy who is into science and transhumanism, and a girl he told about all the crazy things that were going to happen. He died, and all of the things he said started to happen. She ended up floating around Saturn, remembering him.
Either he or she was in a wheelchair. He was dying, and he was disappointed about it because of all the cool stuff that was going to happen that she was going to be around for; some of it had to do with whatever problem she had, which was going to get fixed.
Please help me find this story if you can.