
You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

Seven Apocalypses

2 scarcegreengrass 20 September 2016 02:59AM

0: Recoverable Catastrophe

An apocalypse is an event that permanently damages the world. This scale is for scenarios that are much worse than any normal disaster. Even if 100 million people die in a war, the rest of the world can eventually rebuild and keep going.


1: Economic Apocalypse

The human carrying capacity of the planet depends on the world's systems of industry, shipping, agriculture, and organizations. If the planet's economic and infrastructural systems were destroyed, then we would have to rely on more local farming, and we could not support as high a population or standard of living. In addition, rebuilding the world economy could be very difficult if the Earth's mineral and fossil fuel resources are already depleted.


2: Communications Apocalypse

If large regions of the Earth become depopulated, or if sufficiently many humans die in the catastrophe, it's possible that regions and continents could be isolated from one another. In this scenario, globalization is reversed by obstacles to long-distance communication and travel. Telecommunications, the internet, and air travel are no longer common. Humans are reduced to multiple, isolated communities.


3: Knowledge Apocalypse

If the loss of human population and institutions is so extreme that a large portion of human cultural or technological knowledge is lost, it could reverse one of the most reliable trends in modern history. Some innovations and scientific models can take millennia to develop from scratch.


4: Human Apocalypse

Even if the human population were to be violently reduced by 90%, it's easy to imagine the survivors slowly resettling the planet, given the resources and opportunity. But a sufficiently extreme transformation of the Earth could drive the human species completely extinct. To many people, this is the worst possible outcome, and any further developments are irrelevant next to the end of human history.

 

5: Biosphere Apocalypse

In some scenarios (such as the physical destruction of the Earth), one can imagine the extinction not just of humans, but of all known life. Only astrophysical and geological phenomena would be left in this region of the universe. In this timeline we are unlikely to be succeeded by any familiar life forms.


6: Galactic Apocalypse

A rare few scenarios have the potential to wipe out not just Earth, but also all nearby space. This usually comes up in discussions of hostile artificial superintelligence, or very destructive chain reactions of exotic matter. However, the nature of cosmic inflation and extraterrestrial intelligence is still unknown, so it's possible that some phenomenon will ultimately interfere with the destruction.


7: Universal Apocalypse

This form of destruction is thankfully exotic. People discuss the loss of all of existence as an effect of topics like false vacuum bubbles, simulationist termination, solipsistic or anthropic observer effects, Boltzmann brain fluctuations, time travel, or religious eschatology.


The goal of this scale is to give a little more resolution to a speculative, unfamiliar space, in the same sense that the Kardashev Scale provides a little terminology to talk about the distant topic of interstellar civilizations. It can be important in x-risk conversations to distinguish between disasters and truly worst-case scenarios. Even if some of these scenarios are unlikely or impossible, they are nevertheless discussed, and terminology can be useful to facilitate conversation.

A Weird Trick To Manage Your Identity

2 Gleb_Tsipursky 19 September 2016 07:13PM

I’ve always been uncomfortable being labeled “American.” Though I’m a citizen of the United States, the term feels restrictive and confining. It obliges me to identify with aspects of the United States with which I am not thrilled. I have similar feelings of limitation with respect to other labels I assume. Some of these labels don’t feel completely true to who I truly am, or impose certain perspectives on me that diverge from my own.

 

These concerns are why it's useful to keep your identity small, use identity carefully, and be strategic in choosing it.

 

Yet these pieces speak more to System 2 than to System 1. I recently came up with a weird trick that has made me more comfortable identifying with groups or movements that resonate with me, by creating a System 1, visceral identity management strategy. The trick is simply to put the word “weird” before any identity category I think about.

 

I’m not an “American,” but a “weird American.” Once I started thinking about myself as a “weird American,” I was able to think calmly through which aspects of being American I identified with and which I did not, setting the latter aside from my identity. For example, I used the term “weird American” to describe myself when meeting a group of foreigners, and we had great conversations about what I meant and why I used the term. This subtle change enables my desire to identify with the label “American,” but allows me to separate myself from any aspects of the label I don’t support.

 

Beyond nationality, I’ve started using the term “weird” in front of other identity categories. For example, I'm a professor at Ohio State. I used to become deeply frustrated when students didn’t prepare adequately for their classes with me. No matter how hard I tried, or whatever clever tactics I deployed, some students simply didn’t care. Instead of allowing that situation to keep bothering me, I started to think of myself as a “weird professor” - one who set up an environment that helped students succeed, but didn’t feel upset and frustrated by those who failed to make the most of it.

 

I’ve been applying the weird trick in my personal life, too. Thinking of myself as a “weird son” makes me feel more at ease when my mother and I don’t see eye-to-eye; thinking of myself as a “weird nice guy,” rather than just a nice guy, has helped me feel confident about my decisions to be firm when the occasion calls for it.

 

So, why does this weird trick work? It’s rooted in strategies of reframing and distancing, two research-based methods for changing our thought frameworks. Reframing involves changing one’s framework of thinking about a topic in order to create more beneficial modes of thinking. For instance, in reframing myself as a weird nice guy, I have been able to say “no” to requests people make of me, even though my intuitive nice guy tendency tells me I should say “yes.” Distancing refers to a method of emotional management through separating oneself from an emotionally tense situation and observing it from a third-person, external perspective. Thus, if I think of myself as a weird son, I don’t have nearly as many negative emotions during conflicts with my mom. It gives me the space for calm and sound decision-making.

 

Thinking of myself as "weird" also applies to the context of rationality and effective altruism for me. Thinking of myself as a "weird" aspiring rationalist and EA helps me be more calm and at ease when I encounter criticisms of my approach to promoting rational thinking and effective giving. I can distance myself from the criticism better, and see what I can learn from the useful points in the criticism to update and be stronger going forward.

 

Overall, using the term “weird” before any identity category has freed me from confinements and restrictions associated with socially-imposed identity labels and allowed me to pick and choose which aspects of these labels best serve my own interests and needs. I hope being “weird” can help you manage your identity better as well!

Open thread, Sep. 19 - Sep. 25, 2016

2 DataPacRat 19 September 2016 06:34PM

If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

Weekly LW Meetups

2 FrankAdamek 16 September 2016 03:51PM

This summary was posted to LW Main on September 16th. The following week's summary is here.

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.


Why we may elect our new AI overlords

2 Deku-shrub 04 September 2016 01:07AM

In which I examine some of the latest developments in automated fact checking and prediction markets for policies, and propose we get rich voting for robot politicians.

http://pirate.london/2016/09/why-we-may-elect-our-new-ai-overlords/

Rationality Quotes September 2016

2 bbleeker 02 September 2016 06:44AM

Another month, another rationality quotes thread. The rules are:

  • Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
  • Post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself.
  • Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
  • No more than 5 quotes per person per monthly thread, please.

New Pascal's Mugging idea for potential solution

2 kokotajlod 04 August 2016 08:38PM

I'll keep this quick:

In general, the problem presented by the Mugging is this: as we examine the utility of a given act in each possible world we could be in, in order from most probable to least probable, the utilities can grow much faster than the probabilities shrink. Thus it seems that the standard maxim "Maximize expected utility" is impossible to carry out, since there is no such maximum. When we go down the list of hypotheses, multiplying the utility of the act on each hypothesis by the probability of that hypothesis, the running total does not converge to anything.
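A toy numerical sketch of this non-convergence. The specific ratios here (each hypothesis half as probable as the last, but with triple the utility) are illustrative assumptions, not anything from the mugging itself; the point is only that when the utility ratio beats the probability ratio, the running total never settles:

```python
def partial_sums(p_ratio=0.5, u_ratio=3.0, terms=40):
    """Toy model: hypothesis n has probability p_ratio**n (shrinking)
    and utility u_ratio**n (growing). When utilities grow faster than
    probabilities shrink, the running expected-utility total diverges."""
    total = 0.0
    sums = []
    for n in range(terms):
        total += (p_ratio ** n) * (u_ratio ** n)
        sums.append(total)
    return sums

sums = partial_sums()
# Each term is (3/2)**n, so the running total grows without bound
# instead of converging to an expected utility.
print(sums[9], sums[39])
```

Swap in any utility growth slower than the probability decay (say u_ratio=1.2) and the same loop converges, which is exactly the distinction the post is probing.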

Here's an idea that may fix this:

For every possible world W of complexity N, there's another possible world of complexity N+c that's just like W, except that it has two parallel, identical universes instead of just one. (If it matters, suppose that they are connected by an extra dimension.) (If this isn't obvious, say so and I can explain.)

Moreover, there's another possible world of complexity N+c+1 that's just like W except that it has four such parallel identical universes.

And there's a world of complexity N+c+X that has R parallel identical universes, where R is the largest number that can be specified in X bits of information.

So, take any given extreme mugger hypothesis like "I'm a matrix lord who will kill 3^^^^3 people if you don't give me $5." Uncontroversially, the probability of this hypothesis will be something much smaller than the probability of the default hypothesis. Let's be conservative and say the ratio is 1 in a billion. 

(Here's the part I'm not so confident in)

Translating that into hypotheses with complexity values, that means that the mugger hypothesis has about 30 more bits of information in it than the default hypothesis. 

So, assuming c is small (and actually I think this assumption can be done away with), there's another hypothesis, just as likely as the Mugger hypothesis: that you are in a duplicate universe that is exactly like the universe in the default hypothesis, except with R duplicates, where R is the largest number we can specify in 30 bits.

That number is very large indeed. (See the Busy Beaver function.) My guess is that it's going to be way way way larger than 3^^^^3. (It takes less than 30 bits to specify 3^^^^3, no?)

So this isn't exactly a formal solution yet, but it seems like it might be on to something. Perhaps our expected utility converges after all.

Thoughts?

(I'm very confused about all this which is why I'm posting it in the first place.)

 

Rationality Quotes August 2016

2 bbleeker 01 August 2016 09:32AM

Another month, another rationality quotes thread. The rules are:

  • Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
  • Post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself.
  • Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
  • No more than 5 quotes per person per monthly thread, please.

Clean work gets dirty with time

2 Romashka 31 July 2016 07:59PM
Edited for clarity (hopefully) with thanks to SquirrelInHell.

Lately, I find myself more and more interested in how the concept of "systematized winning" can be applied to large groups of people who have one thing in common - not even a time period, but a hobby or a general interest in a specific discipline. It doesn't seem (to me) to trouble people working on their own individual qualities much - performers, martial artists, managers (those who would self-identify as belonging to these sets) - but I am basing this on "general impressions" and will be glad to be corrected. It does seem to be a norm for some other sets, like sailors, who keep correcting maps every voyage.

The field in which I have been for some years (botany) does have something similar to what sailors do, which lets us see how floras change over time, etc. However, different questions arise when novel sub-disciplines branch off the main trunk, and naturally, the people asking these new questions keep reaching back for some kind of pre-existing observations. And often they don't check how much weight can be assigned to these observations, which, I think, is a bad habit that won't lead to "winning".

It is not "industrial rationality" per se, but a distantly related thing, and I think we might have to recognize it somehow. Or at least, recognize that it requires different assumptions... No set victory, for example... Still, it probably matters to more living people than pure "industrial rationality" does, & ignoring it won't make it go away.


Counterfactual do-what-I-mean

1 Stuart_Armstrong 27 October 2016 01:54PM

A putative new idea for AI control; index here.

The counterfactual approach to value learning might allow natural language goals for AIs.

The basic idea is that when the AI is given a natural language goal like "increase human happiness" or "implement CEV", it is not to figure out what these goals mean, but to follow what a pure learning algorithm would establish these goals as meaning.

This would be safer than a simple figure-out-the-utility-you're-currently-maximising approach. But it still has a few drawbacks. Firstly, the learning algorithm has to be effective itself (in particular, modifying human understanding of the words should be ruled out, and the learning process must avoid concluding that simpler interpretations are always better). And secondly, humans don't yet know what these words mean outside our usual comfort zone, so the "learning" task also involves the AI extrapolating beyond what we know.

Internal Race Conditions

1 SquirrelInHell 23 October 2016 01:23PM

Time start: 14:40:36

I

You might be familiar with the concept of a 'bug', as introduced by CFAR. By using the computer programming analogy, it frames any problem you might have in your life as something fixable - even more, as something to be fixed: something such that fixing it, or thinking about how to fix it, is the first thing that comes to mind when you see such a problem, or 'bug'.

Let's try another analogy in the same style, with something called 'race conditions' in programming. A race condition is a particular type of bug that is typically very hard to find and fix ('debug'). It occurs when two or more parts of the same program 'race' to access some data, resource, decision point etc., in a way that is not controlled by any organising principle.

For example, imagine that you have a document open in an editor program. You make some changes, and you give a command to save the file. While this operation is in progress, you drag and drop the same file in a file manager, moving it to another hard drive. In this case, depending on timing, on the details of the programs, and on the operating system you are using, you might get different results. The old version of the file might be moved to the new location, while the new one is saved in the old location. Or the file might get saved first, and then moved. Or the saving operation will end in an error, or in a truncated or otherwise malformed file on the disk.

If you knew enough details about the situation, you could in fact work out exactly what would happen. But the margin of error in your own handling of the software is so big that you cannot do this in practice (e.g. you'd need to know the exact millisecond when you press each button). So in practice, the outcome is random, depending on how the events play out on a scale smaller than you can directly control (e.g. minute differences in timing, strength of reactions etc.).

II

What is the analogy in humans? One place where, if you look hard, you'll see this pattern a lot is the relation between emotions and conscious decision making.

E.g., a classic failure mode is a "commitment to emotions", which goes like this:

  • I promise to love you forever
  • however if I commit to this, I will have doubts and less freedom, which will generate negative emotions
  • so I'll attempt to fall in love faster than my doubts grow
  • let's do this anyway, why won't we?

The problem here is a typical emotional "race condition": there is a lot of variability in the outcome, depending on how events play out. There could be a "butterfly effect", in which e.g. a single weekend trip together could determine the fate of the relationship, by creating a swing up or down, which would give one side of emotions a head start in the race.

III

Another typical example is making a decision about continuing a relationship:

  • when I spend time with you, I like you more
  • when I like you more, I want to continue our relationship
  • when we have a relationship, I spend more time with you

As you can see, there is a loop in the decision process. This cannot possibly end well.

A wild emotional rollercoaster is probably around the least bad outcome of this setup.

IV

So how do you fix race conditions?

By creating structure.

By following principles which compute the result explicitly, without unwanted chaotic behaviour.

By removing loops from decision graphs.

First and foremost, by recognizing that leaving a decision to a race condition is strictly worse than any decision process that we consciously design, even if this process is flipping a coin (at least you know the odds!).
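Back on the programming side of the analogy, "creating structure" is exactly what a lock does. A minimal sketch of the fix for the shared-counter kind of race: the lock serializes each read-modify-write, so the outcome no longer depends on timing.

```python
import threading
import time

counter = 0
lock = threading.Lock()

def safe_increment(times):
    """Same read-modify-write as before, but structured by a lock."""
    global counter
    for _ in range(times):
        with lock:                 # only one thread inside at a time
            current = counter
            time.sleep(0)          # a yield here can no longer cause a lost update
            counter = current + 1

threads = [threading.Thread(target=safe_increment, args=(1000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # always 4000: the structure removed the race
```

The lock is the programming analogue of a pre-decided principle: it doesn't make the threads slower thinkers, it just forbids them from acting on a value while someone else is mid-update.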

Example: deciding to continue the relationship.

Proposed solution (arrows represent influence):

(1) controlled, long-distance emotional evaluation -> (2) systemic decision -> (3) day-to-day emotions

The idea is to remove the loop by organising emotions into two groups: those that are directly influenced by the decision or its consequences (3), and more distant "evaluation" emotions (1). An opportunity to feel emotions as in (1) can be created by pre-deciding a time to step back and judge the situation from more distance, e.g. "after 6 months of this relationship I will go for a 2-week vacation to my aunt in France, and think about it in a clear-headed way, making sure I consider emotions about the general picture, not day-to-day things like physical affection etc.".

V

There is much to write on this topic, so please excuse my brevity (esp. in the last part, giving some examples of systemic thinking about emotions) - there is easily enough content about this to fill a book (or two). But I hope I gave you some idea.

Time end: 15:15:42

Writing stats: 31 minutes, 23 wpm, 133 cpm

New LW Meetup: Zurich

1 FrankAdamek 21 October 2016 10:47AM

This summary was posted to LW Main on October 21st. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, St. Petersburg, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.


[Link] The Leverhulme Centre for the Future of Intelligence officially launches.

1 ignoranceprior 21 October 2016 01:22AM

[Link] H+Pedia opens projects and editorial portal

1 Deku-shrub 16 October 2016 09:41PM

[Link] Wikipedia book based on betterhumans' article on cognitive biases

1 MathieuRoy 14 October 2016 01:03AM

[Link] An attempt in layman's language to explain the metaethics sequence in a single post.

1 Bound_up 12 October 2016 01:57PM

Weekly LW Meetups

1 FrankAdamek 30 September 2016 02:48PM

This summary was posted to LW Main on September 30th. The following week's summary is here.

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.


[Link] Tech behemoths form artificial-intelligence nonprofit

1 Gleb_Tsipursky 29 September 2016 04:29AM

Weekly LW Meetups

1 FrankAdamek 23 September 2016 03:52PM

Open thread, Sep. 12 - Sep. 18, 2016

1 MrMind 12 September 2016 06:49AM

If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

The Extraordinary Link Between Deep Neural Networks and the Nature of the Universe

1 morganism 10 September 2016 07:13PM

"The answer is that the universe is governed by a tiny subset of all possible functions. In other words, when the laws of physics are written down mathematically, they can all be described by functions that have a remarkable set of simple properties."

“For reasons that are still not fully understood, our universe can be accurately described by polynomial Hamiltonians of low order.” These properties mean that neural networks do not need to approximate an infinitude of possible mathematical functions but only a tiny subset of the simplest ones.

Interesting article, and just diving into the paper now, but it looks like this is a big boost to the simulation argument. If the universe is built like a game engine, with stacked sets like Mandelbrots, then the simplicity itself becomes a driver in a fabricated reality.

 

https://www.technologyreview.com/s/602344/the-extraordinary-link-between-deep-neural-networks-and-the-nature-of-the-universe/

Why does deep and cheap learning work so well?

http://arxiv.org/abs/1608.08225

Risks from Approximate Value Learning

1 capybaralet 27 August 2016 07:34PM

Solving the value learning problem is (IMO) the key technical challenge for AI safety.
How good or bad is an approximate solution?

EDIT for clarity:
By "approximate value learning" I mean something which does a good (but suboptimal from the perspective of safety) job of learning values.  So it may do a good enough job of learning values to behave well most of the time, and be useful for solving tasks, but it still has a non-trivial chance of developing dangerous instrumental goals, and is hence an Xrisk.

Considerations:

1. How would developing good approximate value learning algorithms affect AI research/deployment?
It would enable more AI applications. For instance, many robotics tasks, such as "smooth grasping motion", are difficult to manually specify a utility function for. This could have positive or negative effects:

Positive:
* It could encourage more mainstream AI researchers to work on value-learning.

Negative:
* It could encourage more mainstream AI developers to use reinforcement learning to solve tasks for which "good-enough" utility functions can be learned.
Consider a value-learning algorithm which is "good-enough" to learn how to perform complicated, ill-specified tasks (e.g. folding a towel).  But it's still not quite perfect, and so every second, there is a 1/100,000,000 chance that it decides to take over the world. A robot using this algorithm would likely pass a year-long series of safety tests and seem like a viable product, but would be expected to decide to take over the world in ~3 years.
Without good-enough value learning, these tasks might just not be solved, or might be solved with safer approaches involving more engineering and less performance, e.g. using a collection of supervised learning modules and hand-crafted interfaces/heuristics.
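The arithmetic in the towel-folding example above checks out: with a per-second failure chance of 1/100,000,000, the expected time to the first treacherous turn is 10^8 seconds, and a year-long test is quite likely to be passed. A quick sketch:

```python
# Checking the "1/100,000,000 per second" arithmetic from the example above.
p_per_second = 1e-8                     # chance per second of a treacherous turn
seconds_per_year = 365.25 * 24 * 3600

# Mean of a geometric distribution: expected seconds until first failure.
expected_years = (1 / p_per_second) / seconds_per_year

# Chance the robot gets through a year-long safety test without incident.
survive_one_year = (1 - p_per_second) ** seconds_per_year

print(round(expected_years, 2))    # ~3.17 years to the expected failure
print(round(survive_one_year, 2))  # ~0.73 chance of passing a year-long test
```

So the robot passes a year of testing about 73% of the time, yet is expected to fail within roughly three years of deployment, which is the post's point about "good-enough" value learning looking viable in trials.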

2. What would a partially aligned AI do? 
An AI programmed with an approximately correct value function might fail 
* dramatically (see, e.g. Eliezer, on AIs "tiling the solar system with tiny smiley faces.")
or
* relatively benignly (see, e.g. my example of an AI that doesn't understand gustatory pleasure)

Perhaps a more significant example of benign partial-alignment would be an AI that has not learned all human values, but is corrigible and handles its uncertainty about its utility in a desirable way.

Weekly LW Meetups

1 FrankAdamek 19 August 2016 03:40PM

This summary was posted to LW Main on August 19th. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.


Avoiding collapse: Grand challenges for science and society to solve by 2050

1 morganism 15 August 2016 05:47AM

"We maintain that humanity’s grand challenge is solving the intertwined problems of human population growth and overconsumption, climate change, pollution, ecosystem destruction, disease spillovers, and extinction, in order to avoid environmental tipping points that would make human life more difficult and would irrevocably damage planetary life support systems."

 

pdf onsite

https://elementascience.org/articles/94

Advice to new Doctors starting practice

1 anandjeyahar 07 August 2016 08:27AM

Hi all,

Please read the Disclaimers at the end of the post first, if you're easily offended.

 

Generalists(general medicine):

  1. Get unbeatable at 20 Questions (rationality link). It'll help you make your initial diagnoses (the ones based on questions about symptoms) faster and more accurate.
  2. Understand probability, Bayes' theorem, and how to apply them. This will help you interpret the test results you ordered based on the 20 questions.
  3. Understand the base rate fallacy, and how to avoid being overconfident.
  4. Understand the upsides and downsides of the drugs you prescribe. Know the probabilities of fatal and adverse side-effects and update them with evidence (Bayes' theorem mentioned above) as you try out different brands and combinations.
  5. Know the costs and benefits of any treatment and help the patient make a good decision based on the cost-benefit analysis of treatment combined with the probabilities of outcome.
  6. Ask for and keep a history of the patient's medical records and allergies, going back to their grandparents.
  7. Be willing and able to judge when a patient is better off with a specialist. Try to keep in touch with doctors nearby, hopefully of all specialist types.
  8. Explain the treatment options and their pros and cons in easy language to the patients. It'll reduce misunderstandings and eventual dissatisfaction with the treatment.
  9. Resist the urge to treat patients as NPCs. Involve them in the treatment process.
  10. Meditate
  11. Find a hobby, that you can keep improving on till the end of life.
  12. Be aware of the conflict of interest between the patient and the pharmaceutical companies.
  13. Have enough research skills to form opinions on base rates/probabilities in different diseases and treatment methods as needed.
  14. If you're in a big hospital setup, make sure you have the best hospital administration.
  15. Medical expertise is only relevant once you see the patient. Your ability to judge the evidence requires getting access to it; this means you need to be able to correctly send requests, get the data back, and keep all this attached to the correct patient. Scheduling, filing and communication - lacking these, medical expertise is meaningless.
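Items 2 and 3 of the list above (Bayes' theorem and the base rate fallacy) can be made concrete with a short sketch. The numbers here are illustrative assumptions, not real test characteristics: a disease with a 1% base rate and a test with 90% sensitivity and 95% specificity.

```python
def posterior_given_positive(prevalence, sensitivity, specificity):
    """Bayes' theorem for a diagnostic test:
    P(disease | positive) = P(+|D) * P(D) / P(+)."""
    p_pos_given_disease = sensitivity
    p_pos_given_healthy = 1 - specificity  # false positive rate
    p_pos = (p_pos_given_disease * prevalence
             + p_pos_given_healthy * (1 - prevalence))
    return p_pos_given_disease * prevalence / p_pos

# Illustrative numbers: 1% prevalence, 90% sensitivity, 95% specificity.
post = posterior_given_positive(prevalence=0.01,
                                sensitivity=0.90,
                                specificity=0.95)
print(round(post, 3))  # ~0.154: a positive result means ~15%, not 90%
```

This is the base rate fallacy in one line of output: because healthy patients vastly outnumber sick ones, most positives are false positives, and ignoring the prevalence leads straight to the overconfidence item 3 warns about.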

Specialists:

Basically the same skill sets as above. One difference is in the skill level and you should customize that as needed.

  1. For example, you would need to be able to explain the treatment options and the probabilistic nature of the outcomes to your patients.
  2. As for research, keep track of progress in your area: treatment methods and their different outcomes for patients' quality of life after treatment.
  3. Better applied Bayesian skills, in the sense of identifying the independent variables affecting the outcome and their probabilities.
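One way to read point 3 is combining independent findings in odds form. This sketch assumes the findings really are conditionally independent (a strong assumption in practice), and the base rate and likelihood ratios are made-up illustrative numbers:

```python
def combine_evidence(prior_prob, likelihood_ratios):
    """Combine a prior with independent pieces of evidence via odds.

    posterior odds = prior odds x product of likelihood ratios,
    valid only if the evidence items are conditionally independent.
    """
    odds = prior_prob / (1 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Hypothetical: a condition with a 5% base rate, plus two independent
# findings with likelihood ratios 4.0 and 2.5:
p = combine_evidence(0.05, [4.0, 2.5])
print(round(p, 3))  # → 0.345
```

Each finding multiplies the odds, which is why a couple of moderately informative signs can move a 5% prior past one in three.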

 

Some controversial ideas (use your common sense before trying these):

  1. Experiment a little with your biochemistry and see how it affects your thought processes. To be safe, stick to biologically produced substances. For example, injecting yourself with a small adrenaline dose and monitoring the bodily response can help you keep your thinking clear in emergency situations.
  2. Know your own biology better. For example, male vs. female differences mean the adrenaline response is different and peaks later in females. If you think that's wrong, please go back and check your coursework. Also watch this two-hour video and come back with objections after reading the studies he quotes.
  3. Regularly check your blood states (at whatever frequency your practice and nature of work demand), e.g. hormone levels, so that you can regulate yourself for optimal decision-making.
  4. If you're a woman, you'll practice some of the skill sets above differently. For example, mastery over emotions might need more practice, while empathizing with and connecting to the patient might be easier.

Disclaimers:

  1. Most of the above is based on my experiences (either as a patient myself or as a concerned relative) with Indian doctors. Some of it may be trivial to others, but most of it is skills a doctor will need that are ignored in school.
  2.  I've split it in two (specialists and generalists) but there's a fair amount of overlap.
  3. These are fairly high standards, but worth shooting for, and I've kept the focus on working smart rather than hard.
  4. I've stayed away from a few topics: bedside manner/social skills, specific medical treatments and conditions (obviously; I'm not a doctor, after all), and a few others. You can add/delete (and also specify/pick levels) as you see fit.
  5. Pick the skill-levels as demanded by your client population and adjust.
  6. I'm assuming generalists don't have to deal with emergency cases; where that's not likely to hold, pick the common emergency areas and follow the specialist advice for them.
  7. I wrote this based on my experiences and with humans in mind, but veterinary doctors may find some of it useful too.

* -- I understand this is difficult in Indian circumstances, but I've seen it done manually (simply leaves of prescriptions organized alphabetically; link to dr.rathinavel), so it's possible and worth the effort, unless you practice in an area with a highly migratory population (for example, rural vs urban areas).

**-- If you're trying to compete on availability for consultation, you'll need to be able to do this after being woken in the middle of the night.

 

I'm hoping to convert this into a "rationalist skills for Doctors" wiki page, so please provide feedback, especially if you're a practicing doctor. If you don't want to post publicly, email me (address in my profile) or comment on WordPress.

August 2016 Media Thread

1 ArisKatsaris 01 August 2016 07:00AM

This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.

Rules:

  • Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
  • If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
  • Please post only under one of the already created subthreads, and never directly under the parent media thread.
  • Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
  • Use the "Meta" thread if you want to discuss about the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules.

Weekly LW Meetups

1 FrankAdamek 29 July 2016 03:45PM

This summary was posted to LW Main on July 29th. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.

continue reading »

[Link] Citizen scientist space projects

morganism 28 October 2016 09:56PM

Philosophical theory with an empirical prediction

0 mgin 28 October 2016 04:14PM

I have a philosophical theory which implies some things empirically about quantum physics, and I was wondering if anyone knowledgeable on the subject could give me some insight.

It goes something like this:

Anathema to reductionists: quarks (and by "quarks" I just mean whatever the fundamental particles of the universe are) are not governed by simple rules à la Conway's Game of Life; rather, all of metaphysics goes into their behavior.

The reductionist basically reduces metaphysics to the simple rules that govern quarks. Fundamentally there is no other identity or causality; everything else is just emergent from that. Anything we want to call "real" that we deal with in ordinary experience has no metaphysical identity or causal efficacy of its own; it's just an illusion produced by tons of atoms bouncing around. If the universe is akin to Conway's Game of Life, then I don't think the things we see around us are actually what we think they are. They have no real identity on a metaphysical level; rather, they are just patterns of particles in motion, governed by mathematically simple rules.

But suppose there actually is metaphysical identity and causal power in the things around us. The place I can see for that is this: the unknown rules governing quarks are not mathematically simple rules; rather, that's literally where all of metaphysics is contained. Quarks entangle together according to high-level concepts corresponding to the things we see around us, including a person's identity, and they have not the mathematically simple causal powers of Conway's Game of Life but the causal powers of the identity of the high-level agent.

The empirical question is this: do we observe the fundamental particles of the universe behaving according to mathematically simple rules, or do they seem to behave in complex/unpredictable ways depending on how they are entangled / what they are interacting with?

 

Adding an example to clarify:

The behavior of the quarks corresponds to the identity of the things we see around us. The things we see around us are constituted by quarks - but the question is, are these quarks behaving mindlessly as billiard balls, or is their behavior the result of complex rules corresponding to the identity of the thing they form?

In other words, suppose we're talking about a living ant, are the quarks which constitute that ant behaving according to simple mathematical rules like billiard balls, and the whole concept of there being an "ant" is just an illusion produced by these particles bouncing around, or are these quarks constituting the ant actually behaving "ant-like"?

Is the causal behavior of the ant determined by the billiard-ball interactions of quarks bouncing around, or does the causal behavior actually originate in the identity of the ant, with the quark interactions being decided according to its nature?

What I'm saying is that there metaphysically is such a thing as an ant: when quarks "get together as an ant", they behave differently; they behave ant-like. Given how much is unknown about exactly why quarks behave the way they do, why is this ruled out: that when they "get together as an ant", they behave ant-like?

Basically the idea is, when it comes to the interactions of the quarks constituting the ant with the quarks constituting the things the ant interacts with, the behavior of those interactions is determined not by simple, universal rules of quark behavior, but by the rules of quark behavior that are in effect "when the quarks are an ant".

To further clarify this example:

This is framed in general terms, because I don't actually know any quantum physics, but I'm talking about the fundamental physical particles ("quarks", for lack of a better term), and their behavior at the quantum level - behavior which we don't fully understand. So one could say in general terms, sometimes the quarks "swerve left" and other times they "swerve right", and we don't exactly know why they do that in any given case.

So the question is, suppose the behavior of quarks in general is not determined by simple, universal laws of quark behavior, e.g. "always swerve left 50% of the time", but rather, there are metaphysically real and physically meaningful "quark groups", like if a bunch of quarks are entangled together in a group constituting what we'd observe to be an ant, then quarks in that quark group behave differently. So for example, the quarks in that "ant quark group" might always swerve left when they interact with another quark group of a different kind.

Weekly LW Meetups

0 FrankAdamek 28 October 2016 03:47PM

Your Truth Is Not My Truth

0 Bound_up 28 October 2016 01:35PM

Can someone help me dissolve this, and give insight into how to proceed with someone who says this?

 

What are they saying, exactly? That the set of beliefs in their head that they use to make decisions is not the same set of beliefs that you use to make decisions?

 

Could I say something like "Yes, that's so, but how do you know that your truth matches what is in the real world? Is there some way to know that your truth isn't only true for you, and not actually true for everybody?"

 

I'm trying to get a feel for what they mean by "true" in this case, since it's obviously not "matching reality."

[Link] Slashdot: Study Finds Little Lies Lead To Bigger Ones

0 Gunnar_Zarncke 26 October 2016 06:53AM

[Link] Scientists Create AI Program That Can Predict Human Rights Trials With 79 Percent Accuracy

0 Gunnar_Zarncke 26 October 2016 06:47AM

Weekly LW Meetups

0 FrankAdamek 14 October 2016 03:56PM

This summary was posted to LW Main on October 14th. The following week's summary is here.

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, St. Petersburg, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.

continue reading »

[Link] GiveWell: A case study in effective altruism, part 1

0 philh 14 October 2016 10:46AM

[Link] Quantum Bayesianism

0 morganism 08 October 2016 11:27PM

Weekly LW Meetups

0 FrankAdamek 07 October 2016 03:58AM

This summary was posted to LW Main on October 7th. The following week's summary is here.

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.

continue reading »

[Link] My latest round of internet urban legend research: Deep web secrets

0 Deku-shrub 28 September 2016 07:12PM

Problems with learning values from observation

0 capybaralet 21 September 2016 12:40AM

I dunno if this has been discussed elsewhere (pointers welcome).

Observational data doesn't allow one to distinguish correlation and causation.
This is a problem for an agent attempting to learn values without being allowed to make interventions.

For example, suppose that happiness is just a linear function of how much Utopamine is in a person's brain.
If a person smiles only when their Utopamine concentration is above 3 ppm, then a value-learner which observes both someone's Utopamine levels and facial expression, and tries to predict their reported happiness on the basis of these features, will notice that smiling is correlated with higher levels of reported happiness and thus erroneously believe that smiling is partially responsible for the happiness.
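The confound in this toy example can be simulated directly. The setup below (uniform Utopamine levels, the 2.0 coefficient, the sample size) is my own illustrative parameterization of the scenario described above:

```python
import random

random.seed(0)

# Causal model: happiness is a linear function of Utopamine alone.
# Smiling is a downstream effect (u > 3 ppm) with no causal effect on happiness.
data = []
for _ in range(10000):
    u = random.uniform(0, 6)       # Utopamine concentration, ppm
    smile = 1 if u > 3 else 0      # smiling is caused by Utopamine
    happiness = 2.0 * u            # happiness depends on Utopamine only
    data.append((u, smile, happiness))

# A purely observational learner sees smiling strongly associated with happiness:
smilers = [h for _, s, h in data if s == 1]
non_smilers = [h for _, s, h in data if s == 0]
print(sum(smilers) / len(smilers))          # ~9.0
print(sum(non_smilers) / len(non_smilers))  # ~3.0

# Yet intervening to force a smile (setting smile = 1 while u stays fixed)
# would leave happiness unchanged: correlation without causation.
```

Only an intervention (forcing a smile and checking that reported happiness does not move) distinguishes the two hypotheses, which is exactly the problem for an observation-only learner.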

------------------
an IMPLICATION:
I have a picture of value learning where the AI learns via observation (since we don't want to give an unaligned AI access to actuators!).
But this makes it seem important to consider how to make an unaligned AI safe enough to perform value-learning-relevant interventions.

Weekly LW Meetups

0 FrankAdamek 09 September 2016 03:48PM

This summary was posted to LW Main on September 9th. The following week's summary is here.

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.

continue reading »

View more: Prev | Next