Seven Apocalypses
0: Recoverable Catastrophe
An apocalypse is an event that permanently damages the world. This scale is for scenarios that are much worse than any normal disaster. Even if 100 million people die in a war, the rest of the world can eventually rebuild and keep going.
1: Economic Apocalypse
The human carrying capacity of the planet depends on the world's systems of industry, shipping, agriculture, and organizations. If the planet's economic and infrastructural systems were destroyed, then we would have to rely on more local farming, and we could not support as high a population or standard of living. In addition, rebuilding the world economy could be very difficult if the Earth's mineral and fossil fuel resources are already depleted.
2: Communications Apocalypse
If large regions of the Earth become depopulated, or if sufficiently many humans die in the catastrophe, it's possible that regions and continents could be isolated from one another. In this scenario, globalization is reversed by obstacles to long-distance communication and travel. Telecommunications, the internet, and air travel are no longer common. Humans are reduced to multiple, isolated communities.
3: Knowledge Apocalypse
If the loss of human population and institutions is so extreme that a large portion of human cultural or technological knowledge is lost, it could reverse one of the most reliable trends in modern history. Some innovations and scientific models can take millennia to develop from scratch.
4: Human Apocalypse
Even if the human population were to be violently reduced by 90%, it's easy to imagine the survivors slowly resettling the planet, given the resources and opportunity. But a sufficiently extreme transformation of the Earth could drive the human species completely extinct. To many people, this is the worst possible outcome, and any further developments are irrelevant next to the end of human history.
5: Biosphere Apocalypse
In some scenarios (such as the physical destruction of the Earth), one can imagine the extinction not just of humans, but of all known life. Only astrophysical and geological phenomena would be left in this region of the universe. In this timeline we are unlikely to be succeeded by any familiar life forms.
6: Galactic Apocalypse
A rare few scenarios have the potential to wipe out not just Earth, but also all nearby space. This usually comes up in discussions of hostile artificial superintelligence, or very destructive chain reactions of exotic matter. However, the nature of cosmic inflation and extraterrestrial intelligence is still unknown, so it's possible that some phenomenon will ultimately interfere with the destruction.
7: Universal Apocalypse
This form of destruction is thankfully exotic. People discuss the loss of all of existence in connection with topics like false vacuum bubbles, simulationist termination, solipsistic or anthropic observer effects, Boltzmann brain fluctuations, time travel, or religious eschatology.
The goal of this scale is to give a little more resolution to a speculative, unfamiliar space, in the same sense that the Kardashev Scale provides a little terminology for the distant topic of interstellar civilizations. It can be important in x-risk conversations to distinguish between ordinary disasters and truly worst-case scenarios. Even if some of these scenarios are unlikely or impossible, they are nevertheless discussed, and terminology can be useful to facilitate conversation.
A Weird Trick To Manage Your Identity
I’ve always been uncomfortable being labeled “American.” Though I’m a citizen of the United States, the term feels restrictive and confining: it obliges me to identify with aspects of the United States I’m not thrilled about. I feel similarly limited by other labels I assume. Some of these labels don’t feel completely true to who I am, or impose perspectives on me that diverge from my own.
These concerns are why it's useful to keep one's identity small, use identity carefully, and be strategic in choosing your identity.
Yet these pieces speak more to System 2 than to System 1. I recently came up with a weird trick that has made me more comfortable identifying with groups or movements that resonate with me, while giving me a visceral, System 1 identity-management strategy. The trick is simply to put the word “weird” before any identity category I think about.
I’m not an “American,” but a “weird American.” Once I started thinking about myself as a “weird American,” I was able to think calmly through which aspects of being American I identified with and which I did not, setting the latter aside from my identity. For example, I used the term “weird American” to describe myself when meeting a group of foreigners, and we had great conversations about what I meant and why I used the term. This subtle change lets me satisfy my desire to identify with the label “American,” while separating myself from any aspects of the label I don’t support.
Beyond nationality, I’ve started using the term “weird” in front of other identity categories. For example, I'm a professor at Ohio State. I used to become deeply frustrated when students didn’t prepare adequately for their classes with me. No matter how hard I tried, or whatever clever tactics I deployed, some students simply didn’t care. Instead of allowing that situation to keep bothering me, I started to think of myself as a “weird professor” - one who set up an environment that helped students succeed, but didn’t feel upset and frustrated by those who failed to make the most of it.
I’ve been applying the weird trick in my personal life, too. Thinking of myself as a “weird son” makes me feel more at ease when my mother and I don’t see eye-to-eye; thinking of myself as a “weird nice guy,” rather than just a nice guy, has helped me feel confident about my decisions to be firm when the occasion calls for it.
So, why does this weird trick work? It’s rooted in reframing and distancing, two research-based methods for changing our thought frameworks. Reframing involves changing one’s framework of thinking about a topic in order to create more beneficial modes of thinking. For instance, by reframing myself as a weird nice guy, I have been able to say “no” to requests people make of me, even though my intuitive nice-guy tendency tells me I should say “yes.” Distancing is a method of emotional management: separating oneself from an emotionally tense situation and observing it from a third-person, external perspective. Thus, if I think of myself as a weird son, I don’t have nearly as much negative emotion during conflicts with my mom, which leaves space for calm and sound decision-making.
Thinking of myself as "weird" also applies to the context of rationality and effective altruism for me. Thinking of myself as a "weird" aspiring rationalist and EA helps me be more calm and at ease when I encounter criticisms of my approach to promoting rational thinking and effective giving. I can distance myself from the criticism better, and see what I can learn from the useful points in the criticism to update and be stronger going forward.
Overall, using the term “weird” before any identity category has freed me from confinements and restrictions associated with socially-imposed identity labels and allowed me to pick and choose which aspects of these labels best serve my own interests and needs. I hope being “weird” can help you manage your identity better as well!
Open thread, Sep. 19 - Sep. 25, 2016
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "
Weekly LW Meetups
This summary was posted to LW Main on September 16th. The following week's summary is here.
Irregularly scheduled Less Wrong meetups are taking place in:
- Baltimore Area / UMBC Weekly Meetup: 18 September 2016 07:00PM
- Munich Meetup in September: 17 September 2016 04:00PM
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
- [Moscow] Role playing game based on HPMOR in Moscow: 17 September 2016 03:00PM
- Moscow: keynote for the first Kocherga year, text analysis, rationality applications discussion: 18 September 2016 02:00PM
- Sydney Rationality Dojo - October 2016: 02 October 2016 04:00PM
- Washington, D.C.: Steelmanning: 18 September 2016 03:30PM
- Vienna: 24 September 2016 03:00PM
Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.
Why we may elect our new AI overlords
In which I examine some of the latest developments in automated fact-checking and prediction markets for policies, and propose we get rich by voting for robot politicians.
http://pirate.london/2016/09/why-we-may-elect-our-new-ai-overlords/
Rationality Quotes September 2016
Another month, another rationality quotes thread. The rules are:
- Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
- Post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
New Pascal's Mugging idea for potential solution
I'll keep this quick:
In general, the problem presented by the Mugging is this: as we examine the utility of a given act for each possible world we could be in, in order from most probable to least probable, the utilities can grow much faster than the probabilities shrink. Thus it seems that the standard maxim "maximize expected utility" is impossible to carry out, since there is no such maximum. When we go down the list of hypotheses, multiplying the utility of the act on each hypothesis by the probability of that hypothesis, the result does not converge to anything.
Here's an idea that may fix this:
For every possible world W of complexity N, there's another possible world of complexity N+c that's just like W, except that it has two parallel, identical universes instead of just one. (If it matters, suppose that they are connected by an extra dimension.) (If this isn't obvious, say so and I can explain.)
Moreover, there's another possible world of complexity N+c+1 that's just like W except that it has four such parallel identical universes.
And a world of complexity N+c+X that has R parallel identical universes, where R is the largest number that can be specified in X bits of information.
So, take any given extreme mugger hypothesis like "I'm a matrix lord who will kill 3^^^^3 people if you don't give me $5." Uncontroversially, the probability of this hypothesis will be something much smaller than the probability of the default hypothesis. Let's be conservative and say the ratio is 1 in a billion.
(Here's the part I'm not so confident in)
Translating that into complexity values, that means the mugger hypothesis contains about 30 more bits of information than the default hypothesis.
So, assuming c is small (and actually I think this assumption can be done away with), there's another hypothesis, just as likely as the Mugger hypothesis: that you are in a duplicate universe exactly like the universe in the default hypothesis, except with R duplicates, where R is the largest number we can specify in 30 bits.
That number is very large indeed. (See the Busy Beaver function.) My guess is that it's going to be way way way larger than 3^^^^3. (It takes less than 30 bits to specify 3^^^^3, no?)
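As a sanity check on the bits arithmetic above, here is a minimal sketch. It assumes the post's conservative 1-in-a-billion ratio and a Solomonoff-style translation from probability ratios to description length:

```python
import math

# The conservative guess from above: the mugger hypothesis is a billion
# times less probable than the default hypothesis.
ratio = 1e9

# Under a Solomonoff-style prior, a hypothesis 10^9 times less likely
# corresponds to roughly log2(10^9) extra bits of description length.
extra_bits = math.log2(ratio)
print(extra_bits)  # ~29.9, i.e. about 30 bits
```

The largest number specifiable in ~30 bits grows like the Busy Beaver function, so no program can print it here; by definition it is at least as large as every other number a 30-bit program can output, including (if the 30-bit estimate holds) 3^^^^3.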
So this isn't exactly a formal solution yet, but it seems like it might be on to something. Perhaps our expected utility converges after all.
Thoughts?
(I'm very confused about all this which is why I'm posting it in the first place.)
Rationality Quotes August 2016
Another month, another rationality quotes thread. The rules are:
- Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
- Post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
Clean work gets dirty with time
Edited for clarity (hopefully) with thanks to Squirrell_in_Hell.
Lately, I find myself more and more interested in how the concept of "systematized winning" can be applied to large groups of people who have just one thing in common - not even a time period, but a hobby or a general interest in a specific discipline. It doesn't seem (to me) to much trouble people working on their own individual qualities - performers, martial artists, managers (those who would self-identify as belonging to these sets) - but I am basing this on general impressions and will be glad to be corrected. It does seem to be a norm for some other sets, like sailors, who keep correcting their maps every voyage.
The field in which I have worked for some years (botany) does have something similar to what sailors do, which lets us see how floras change over time, etc. However, different questions arise when novel sub-disciplines branch off the main trunk, and naturally, the people asking these new questions keep reaching back for pre-existing observations. And often they don't check how much weight can be assigned to those observations - which, I think, is a bad habit that won't lead to "winning".
It is not "industrial rationality" per se, but a distantly related thing, and I think we might have to recognize it somehow. Or at least recognize that it requires different assumptions... no set victory condition, for example... Still, it probably matters to more living people than pure "industrial rationality" does, and ignoring it won't make it go away.
Counterfactual do-what-I-mean
A putative new idea for AI control; index here.
The counterfactual approach to value learning might make it possible to give AIs natural-language goals.
The basic idea is that when the AI is given a natural-language goal like "increase human happiness" or "implement CEV", it is not to figure out for itself what these goals mean, but to follow what a pure learning algorithm would establish these goals as meaning.
This would be safer than a simple figure-out-the-utility-you're-currently-maximising approach, but it still has a few drawbacks. Firstly, the learning algorithm has to be effective itself: in particular, modifying human understanding of the words should be ruled out, and the learning process must avoid concluding that simpler interpretations are always better. Secondly, humans don't yet know what these words mean outside our usual comfort zone, so the "learning" task also involves the AI extrapolating beyond what we know.
Internal Race Conditions
Time start: 14:40:36
I
You might be familiar with the concept of a 'bug', as introduced by CFAR. By using the computer programming analogy, it frames any problem you might have in your life as something fixable - even more, as something to be fixed: something such that fixing it, or thinking about how to fix it, is the first thing that comes to mind when you see such a problem, or 'bug'.
Let's try another analogy in the same style, with something called a 'race condition' in programming. A race condition is a particular type of bug that is typically very hard to find and fix ('debug'). It occurs when two or more parts of the same program 'race' to access some data, resource, decision point, etc., in a way that is not controlled by any organising principle.
For example, imagine that you have a document open in an editor program. You make some changes, and you give the command to save the file. While this operation is in progress, you drag and drop the same file in a file manager, moving it to another hard drive. In this case, depending on the timing, on the details of the programs, and on the operating system you are using, you might get different results. The old version of the file might be moved to the new location while the new one is saved in the old location. Or the file might get saved first, and then moved. Or the saving operation might end in an error, or in a truncated or otherwise malformed file on the disk.
If you knew enough details about the situation, you could in fact work out exactly what would happen. But the margin of error in your own handling of the software is so big that in practice you cannot do this (e.g. you'd need to know the exact millisecond at which you press each button). So in practice the outcome is random, depending on how the events play out on a scale smaller than you can directly control (e.g. minute differences in timing, strength of reactions, etc.).
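To make the analogy concrete, here is a minimal Python sketch of a race (a hypothetical toy, not from the original post): two threads do an unsynchronized read-modify-write on a shared counter, so the final value depends on thread timing rather than on anything the programmer controls.

```python
import threading

counter = 0

def increment_many(n):
    global counter
    for _ in range(n):
        tmp = counter       # read the shared value...
        counter = tmp + 1   # ...and write back; another thread may have
                            # updated counter in between, and its update
                            # is silently lost

threads = [threading.Thread(target=increment_many, args=(100_000,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The result varies from run to run and can be less than 200000:
# the outcome depends on timing we don't directly control.
print(counter)
```

(Depending on the interpreter and its thread-switching granularity the lost updates may be common or rare, but nothing in the code rules them out.)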
II
What is the analogy in humans? One of the places in which when you look hard, you'll see this pattern a lot is the relation of emotions and conscious decision making.
E.g., a classic failure mode is a "commitment to emotions", which goes like this:
- I promise to love you forever
- however if I commit to this, I will have doubts and less freedom, which will generate negative emotions
- so I'll attempt to fall in love faster than my doubts grow
- so let's do this anyway - why wouldn't we?
The problem here is a typical emotional "race condition": there is a lot of variability in the outcome, depending on how events play out. There could be a "butterfly effect", in which e.g. a single weekend trip together could determine the fate of the relationship, by creating a swing up or down, which would give one side of emotions a head start in the race.
III
Another typical example is making a decision about continuing a relationship:
- when I spend time with you, I like you more
- when I like you more, I want to continue our relationship
- when we have a relationship, I spend more time with you
As you can see, there is a loop in decision process. This cannot possibly end well.
A wild emotional rollercoaster is probably around the least bad outcome of this setup.
IV
So how do you fix race conditions?
By creating structure.
By following principles which compute the result explicitly, without unwanted chaotic behaviour.
By removing loops from decision graphs.
First and foremost, by recognizing that leaving a decision to a race condition is strictly worse than any decision process that we consciously design - even if that process is flipping a coin (at least then you know the odds!).
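In the programming version of the analogy, "creating structure" is exactly what a lock does: it serializes access so the outcome no longer depends on timing. A minimal sketch, reusing the hypothetical counter example from part I:

```python
import threading

counter = 0
lock = threading.Lock()

def increment_many(n):
    global counter
    for _ in range(n):
        with lock:        # the lock serializes the read-modify-write,
            counter += 1  # so no increment can be lost

threads = [threading.Thread(target=increment_many, args=(100_000,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # always 200000: the result no longer depends on timing
```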
Example: deciding to continue the relationship.
Proposed solution (arrows represent influence):
(1) controlled, long-distance emotional evaluation -> (2) systemic decision -> (3) day-to-day emotions
The idea is to remove the loop by organising the emotions into two groups: those that are directly influenced by the decision or its consequences (3), and the more distant "evaluation" emotions (1). Space to feel the emotions in (1) can be created by pre-deciding a time to step back and judge the situation from more distance, e.g. "after 6 months of this relationship I will go for a 2-week vacation to my aunt in France, and think about it in a clear-headed way, making sure I consider emotions about the general picture, not day-to-day things like physical affection etc."
V
There is much to write on this topic, so please excuse my brevity (esp. in the last part, giving some examples of systemic thinking about emotions) - there is easily enough content about this to fill a book (or two). But I hope I gave you some idea.
Time end: 15:15:42
Writing stats: 31 minutes, 23 wpm, 133 cpm
New LW Meetup: Zurich
This summary was posted to LW Main on October 21st. The following week's summary is here.
New meetups (or meetups with a hiatus of more than a year) are happening in:
Irregularly scheduled Less Wrong meetups are taking place in:
- Munich Meetup in October: 29 October 2016 04:00PM
- Stockholm: Mental contrasting: 21 October 2016 04:00PM
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
- Bay Area Winter Solstice 2016: 17 December 2016 07:00PM
- [Moscow] Games in Kocherga club: FallacyMania, Tower of Chaos, Scientific Discovery: 26 October 2016 07:40PM
- NY Solstice 2016 - The Story of Smallpox: 17 December 2016 06:00PM
- San Francisco Meetup: Stories: 24 October 2016 06:15PM
- Washington, D.C.: Technology of Communication: 23 October 2016 03:30PM
Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, St. Petersburg, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.
Weekly LW Meetups
This summary was posted to LW Main on September 30th. The following week's summary is here.
Irregularly scheduled Less Wrong meetups are taking place in:
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
- Bay Area Winter Solstice 2016: 17 December 2016 07:00PM
- Melbourne: A Bayesian Guide on How to Read a Scientific Paper: 08 October 2016 03:30PM
- Sydney Rationality Dojo - October 2016: 02 October 2016 04:00PM
Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.
Weekly LW Meetups
This summary was posted to LW Main on September 23rd. The following week's summary is here.
The following meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
- Baltimore Area / UMBC Weekly Meetup: 25 September 2016 07:00PM
- Bay Area Winter Solstice 2016: 17 December 2016 07:00PM
- [Moscow] Games in Kocherga club: FallacyMania, Zendo, Tower of Chaos: 28 September 2016 07:40PM
- San Francisco Meetup: Mini Talks: 26 September 2016 06:15PM
- Sydney Rationality Dojo - October 2016: 02 October 2016 04:00PM
- Vienna: 24 September 2016 03:00PM
- Washington, D.C.: Outdoor Fun & Games: 25 September 2016 03:30PM
Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.
Open thread, Sep. 12 - Sep. 18, 2016
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "
The Extraordinary Link Between Deep Neural Networks and the Nature of the Universe
"The answer is that the universe is governed by a tiny subset of all possible functions. In other words, when the laws of physics are written down mathematically, they can all be described by functions that have a remarkable set of simple properties."
“For reasons that are still not fully understood, our universe can be accurately described by polynomial Hamiltonians of low order.” “These properties mean that neural networks do not need to approximate an infinitude of possible mathematical functions but only a tiny subset of the simplest ones.”
Interesting article, and just diving into the paper now, but it looks like this is a big boost to the simulation argument. If the universe is built like a game engine, with nested, self-similar structures like the Mandelbrot set, then that simplicity itself becomes a driver in a fabricated reality.
Why does deep and cheap learning work so well?
Risks from Approximate Value Learning
Solving the value learning problem is (IMO) the key technical challenge for AI safety.
How good or bad is an approximate solution?
EDIT for clarity:
By "approximate value learning" I mean something which does a good (but suboptimal from the perspective of safety) job of learning values. So it may do a good enough job of learning values to behave well most of the time, and be useful for solving tasks, but it still has a non-trivial chance of developing dangerous instrumental goals, and is hence an Xrisk.
Considerations:
1. How would developing good approximate value learning algorithms affect AI research/deployment?
It would enable more AI applications. For instance, many robotics tasks, such as "smooth grasping motion", are difficult to manually specify a utility function for. This could have positive or negative effects:
Positive:
* It could encourage more mainstream AI researchers to work on value-learning.
Negative:
* It could encourage more mainstream AI developers to use reinforcement learning to solve tasks for which "good-enough" utility functions can be learned.
Consider a value-learning algorithm which is "good enough" to learn how to perform complicated, ill-specified tasks (e.g. folding a towel), but still not quite perfect, so that every second there is a 1/100,000,000 chance that it decides to take over the world. A robot using this algorithm would likely pass a year-long series of safety tests and seem like a viable product, but would be expected to decide to take over the world within ~3 years.
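A back-of-the-envelope check of those numbers, using the hypothetical failure rate from the example:

```python
# Sanity check of the example above (hypothetical numbers).
SECONDS_PER_YEAR = 365 * 24 * 3600           # ~3.15e7 seconds

p_defect_per_second = 1e-8                   # the example's failure rate

# Expected time until the robot decides to take over the world:
expected_seconds = 1 / p_defect_per_second   # 1e8 seconds
print(expected_seconds / SECONDS_PER_YEAR)   # ~3.17 years

# Probability of getting through a year-long safety test without incident:
p_pass_year = (1 - p_defect_per_second) ** SECONDS_PER_YEAR
print(p_pass_year)                           # ~0.73, so it "likely" passes
```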
Without good-enough value learning, these tasks might just not be solved, or might be solved with safer approaches involving more engineering and less performance, e.g. using a collection of supervised learning modules and hand-crafted interfaces/heuristics.
2. What would a partially aligned AI do?
An AI programmed with an approximately correct value function might fail
* dramatically (see, e.g. Eliezer, on AIs "tiling the solar system with tiny smiley faces.")
or
* relatively benignly (see, e.g. my example of an AI that doesn't understand gustatory pleasure)
Perhaps a more significant example of benign partial-alignment would be an AI that has not learned all human values, but is corrigible and handles its uncertainty about its utility in a desirable way.
Weekly LW Meetups
This summary was posted to LW Main on August 19th. The following week's summary is here.
New meetups (or meetups with a hiatus of more than a year) are happening in:
Irregularly scheduled Less Wrong meetups are taking place in:
- Australian-ish Online Hangout: 20 August 2016 07:30PM
- Baltimore Area Weekly Meetup: 21 August 2016 08:00PM
- European Community Weekend: 02 September 2016 03:35PM
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
- [Moscow] Games in Kocherga club: FallacyMania, Zendo, Tower of Chaos: 24 August 2016 07:40PM
- San Jose Meetup: Park Day (VII): 21 August 2016 03:00PM
- Sydney Rationality Dojo - September 2016: 04 September 2016 04:00PM
- Sydney Rationality Dojo - October 2016: 02 October 2016 04:00PM
- Vienna: 20 August 2016 03:00PM
- Washington, D.C.: Mini Talks: 21 August 2016 03:30PM
Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.
Avoiding collapse: Grand challenges for science and society to solve by 2050
"We maintain that humanity’s grand challenge is solving the intertwined problems of human population growth and overconsumption, climate change, pollution, ecosystem destruction, disease spillovers, and extinction, in order to avoid environmental tipping points that would make human life more difficult and would irrevocably damage planetary life support systems."
pdf onsite
Advice to new Doctors starting practice
Hi all,
Please read the Disclaimers at the end of the post first, if you're easily offended.
Generalists (general medicine):
- Get unbeatable at 20 Questions (rationality link). It'll help you make your initial diagnoses (the ones based on questions about symptoms) faster and more accurate.
- Understand probability and Bayes' theorem, and how to apply them.** This will help you interpret the test results you ordered based on the 20 questions. (See the worked example after this list.)
- Understand the base-rate fallacy, and how to avoid being overconfident.
- Understand the upsides and downsides of the drugs you prescribe. Know the probabilities of fatal and adverse side-effects, and update them with evidence (Bayes' theorem, mentioned above) as you try out different brands and combinations.
- Know the costs and benefits of any treatment, and help the patient make a good decision based on the cost-benefit analysis of the treatment combined with the probabilities of the outcomes.
- Ask for and keep a history of the patient's medical records and allergies, reaching back to their grandparents.*
- Be willing and able to judge when a patient is better off with a specialist. Try to keep in touch with doctors nearby, hopefully of all specialties.
- Explain the treatment options and their pros and cons in plain language to the patients. It'll reduce misunderstandings, and eventually dissatisfaction with the treatment.
- Resist the urge to treat patients as NPCs. Involve them in the treatment process.
- Meditate
- Find a hobby that you can keep improving at till the end of your life.
- Be aware of the conflict of interest between the patient and the pharmaceutical companies.
- Have enough research skill to form opinions on base rates/probabilities of different diseases and treatment methods as needed.
- If you're in a big hospital setup, make sure you have the best hospital administration.
- Medical expertise is only relevant once you see the patient. Your ability to judge the evidence requires getting access to it; this means you need to be able to correctly send requests, get the data back, and keep all of it attached to the correct patient. Scheduling, filing, and communication: lacking these, medical expertise is meaningless.
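Here is the worked Bayes example promised above: a sketch with made-up but typical numbers (a 1% prevalence disease, a 90%-sensitive test, a 5% false-positive rate), not a claim about any real test.

```python
# Hypothetical numbers for illustration only.
prevalence = 0.01       # P(disease)
sensitivity = 0.90      # P(positive test | disease)
false_positive = 0.05   # P(positive test | no disease)

# Total probability of a positive result:
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)

# Bayes' theorem: P(disease | positive test)
posterior = sensitivity * prevalence / p_positive
print(posterior)  # ~0.15 -- despite the "90% accurate" test, most
                  # positives are false, because the base rate is low
```

This is the base-rate fallacy in action: without the 1% prior, it is tempting to read a positive result from a 90%-sensitive test as a ~90% chance of disease.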
Specialists:
Basically the same skill sets as above. The main difference is in the skill level, which you should customize as needed.
- For example: you would need to be able to explain the treatment options and the probabilistic nature of the outcomes to your patients.
- As for research, keep track of progress in your area: treatment methods, and their different outcomes for the patients' "quality of life" after treatment.
- Better applied Bayesian skills, in the sense of figuring out the independent variables and the probabilities with which they affect the outcome.
Some controversial ideas (better use your common sense before trying these):
- Experiment a little with your biochemistry and see how it affects your thought processes. To be safe, stick to biologically produced substances. For example: injecting yourself with a small adrenalin dose and monitoring the bodily response can help keep your thinking clear in emergency situations.
- Know your own biology better. For example: male vs. female differences mean the adrenalin response is different and peaks later in females. If you think that's wrong, please go back and check your course work. Also watch this 2-hour video and come back with objections after reading the studies he quotes.
- Keep checking your blood regularly (at whatever frequency your practice and nature of work demand), e.g. hormone levels, so that you can start regulating yourself for optimal decision-making skills.
- If you're a woman, you'll customize some of the skill sets above differently. For example: mastery over emotions might need more practice, while empathizing/connecting with the patient might be easier.
Disclaimers:
- Most of the above is based on my experiences (either as a patient myself or as a concerned relative) with Indian Doctors. Some of it may be trivial to others, but most of it is skills a doctor will need that are ignored in school.
- I've split it in two (specialists and generalists), but there's a fair amount of overlap.
- These are fairly high standards, but worth shooting for, and I've kept the focus on smart rather than hard work.
- I've stayed away from a few topics, like bedside manners/social skills and specific medical treatments and conditions (obviously, I'm not a Doctor after all), and a few others; you can add/delete (and also specify/pick levels) as you see fit.
- Pick the skill levels demanded by your client population and adjust.
- I'm assuming generalists don't have to deal with emergency cases; in some parts that's not likely, so pick the common emergency areas and follow the specialist advice.
- I wrote this based on my experiences and with humans in mind, but veterinary Doctors may find some of it useful too.
* -- I understand this is difficult in Indian circumstances, but I've seen it being done manually (simple files of prescriptions organized alphabetically; link to dr.rathinavel), so it's possible and worth the effort, unless you practice in an area with a highly migratory population (for example, rural vs. urban areas).
** -- If you're trying to compete on availability for consultation, you'll need to be able to do this after being woken in the middle of the night.
I'm hoping to convert this into a "rationalist skills for Doctors" wiki page, so please provide feedback, especially if you're a practicing Doctor. If you don't want to post publicly, email me (address in profile) or comment on wordpress.
August 2016 Media Thread
This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.
Rules:
- Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
- If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
- Please post only under one of the already created subthreads, and never directly under the parent media thread.
- Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
- Use the "Meta" thread if you want to discuss about the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules.
Weekly LW Meetups
New meetups (or meetups with a hiatus of more than a year) are happening in:
Irregularly scheduled Less Wrong meetups are taking place in:
- Munich Meetup in October: 29 October 2016 04:00PM
- Stockholm: Bottlenecks to trading personal resources: 11 November 2016 05:15PM
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
- Bay Area Winter Solstice 2016: 17 December 2016 07:00PM
- Moscow: rational review, status quo bias, interpersonal closeness: 30 October 2016 02:00PM
- NY Solstice 2016 - The Story of Smallpox: 17 December 2016 06:00PM
- San Francisco Meetup: Board Games: 31 October 2016 06:15PM
- Washington, D.C.: Halloween Party: 30 October 2016 03:00PM
Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, St. Petersburg, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.
Your Truth Is Not My Truth
Can someone help me dissolve this, and give insight into how to proceed with someone who says this?
What are they saying, exactly? That the set of beliefs in their head that they use to make decisions is not the same set of beliefs that you use to make decisions?
Could I say something like "Yes, that's so, but how do you know that your truth matches what is in the real world? Is there some way to know that your truth isn't only true for you, and not actually true for everybody?"
I'm trying to get a feel for what they mean by "true" in this case, since it's obviously not "matching reality."
Weekly LW Meetups
This summary was posted to LW Main on October 14th. The following week's summary is here.
Irregularly scheduled Less Wrong meetups are taking place in:
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
- Bay Area Winter Solstice 2016: 17 December 2016 07:00PM
- San Francisco Meetup: Rationality Diary: 17 October 2016 06:15PM
- Washington, D.C.: Fun & Games: 16 October 2016 03:30PM
Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, St. Petersburg, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.
Weekly LW Meetups
This summary was posted to LW Main on October 7th. The following week's summary is here.
Irregularly scheduled Less Wrong meetups are taking place in:
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
- Baltimore Area / UMBC Weekly Meetup: 09 October 2016 08:00PM
- Bay Area Winter Solstice 2016: 17 December 2016 07:00PM
- Melbourne: A Bayesian Guide on How to Read a Scientific Paper: 08 October 2016 03:30PM
- Moscow: rational review, bias busters, Kolmogorov and Jaynes probability: 09 October 2016 02:00PM
- Washington, D.C.: Games Discussion: 09 October 2016 03:30PM
Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.
Problems with learning values from observation
I dunno if this has been discussed elsewhere (pointers welcome).
Observational data doesn't allow one to distinguish correlation and causation.
This is a problem for an agent attempting to learn values without being allowed to make interventions.
For example, suppose that happiness is just a linear function of how much Utopamine is in a person's brain.
If a person smiles only when their Utopamine concentration is above 3 ppm, then a value-learner which observes both someone's Utopamine level and their facial expression, and tries to predict their reported happiness from these features, will notice that smiling is correlated with higher reported happiness, and thus erroneously believe that smiling is partially responsible for the happiness.
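A minimal simulation of this toy model (my sketch; the linear coefficient, the noise level, and the 0-6 ppm range are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: happiness is caused only by Utopamine; smiling is a mere
# side effect of Utopamine exceeding 3 ppm.
utopamine = rng.uniform(0, 6, 10_000)
smiling = (utopamine > 3).astype(float)
happiness = 2.0 * utopamine + rng.normal(0, 0.5, 10_000)

# To a purely observational learner, smiling strongly predicts happiness:
slope = np.polyfit(smiling, happiness, 1)[0]
print(slope)  # large positive coefficient, as if smiling "caused" happiness

# But under an intervention that forces everyone to smile, happiness is
# unchanged, because it never depended on the facial expression:
happiness_forced = 2.0 * utopamine + rng.normal(0, 0.5, 10_000)
print(happiness.mean(), happiness_forced.mean())  # essentially equal
```

Only an intervention (or extra causal assumptions) can tell the two stories apart, which is the point about observation-only value learning.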
------------------
An implication:
I have a picture of value learning where the AI learns via observation (since we don't want to give an unaligned AI access to actuators!).
But this makes it seem important to consider how to make an unaligned AI safe enough to perform value-learning-relevant interventions.
Weekly LW Meetups
This summary was posted to LW Main on September 9th. The following week's summary is here.
Irregularly scheduled Less Wrong meetups are taking place in:
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
- Melbourne: Lightning talks: 10 September 2016 03:30PM
- Moscow LW meetup in "Nauchka" library: 09 September 2016 07:50PM
- [Moscow] Role playing game based on HPMOR in Moscow: 17 September 2016 03:00PM
- San Jose Meetup: Park Day (VIII): 11 September 2016 03:00PM
- Sydney Rationality Dojo - October 2016: 02 October 2016 04:00PM
- [Tel Aviv] NLP for large scale sentiment detection: 13 September 2016 07:00PM
- Washington, D.C.: Singing: 11 September 2016 03:30PM
Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.
Weekly LW Meetups
This summary was posted to LW Main on September 2nd. The following week's summary is here.
Irregularly scheduled Less Wrong meetups are taking place in:
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
- Baltimore Area Weekly Meetup: 04 September 2016 08:00PM
- Melbourne: Social dinner : 03 September 2016 06:00PM
- Moscow LW meetup in "Nauchka" library: 09 September 2016 07:50PM
- Sydney Rationality Dojo - September 2016: 04 September 2016 04:00PM
- Sydney Rationality Dojo - October 2016: 02 October 2016 04:00PM
- Washington, D.C.: Fun & Games: 04 September 2016 03:30PM
Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.
Weekly LW Meetups
This summary was posted to LW Main on August 26th. The following week's summary is here.
Irregularly scheduled Less Wrong meetups are taking place in:
- Baltimore Area Weekly Meetup: 28 August 2016 08:00PM
- European Community Weekend: 02 September 2016 03:35PM
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
- Moscow: rationalist culture, applied consequentialism, Stanovich: 28 August 2016 02:00PM
- Moscow LW meetup in "Nauchka" library: 09 September 2016 07:50PM
- Sydney Rationality Dojo - September 2016: 04 September 2016 04:00PM
- Sydney Rationality Dojo - October 2016: 02 October 2016 04:00PM
- Washington, D.C.: Legos: 28 August 2016 03:30PM
Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.