Open thread, Oct. 19 - Oct. 25, 2015

3 Post author: MrMind 19 October 2015 06:59AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Comments (198)

Comment author: username2 19 October 2015 10:42:24AM 10 points [-]

Luke quotes from Superforecasting on his site:

"Doug knows that when people read for pleasure they naturally gravitate to the like-minded. So he created a database containing hundreds of information sources—from the New York Times to obscure blogs—that are tagged by their ideological orientation, subject matter, and geographical origin, then wrote a program that selects what he should read next using criteria that emphasize diversity. Thanks to Doug’s simple invention, he is sure to constantly encounter different perspectives."

wishing to get his hands on this program.

Does anyone know of something similar, or who this 'Doug' may be? I wonder if it may be as simple as asking the man himself. The book gives 'Doug Lorch' as his full name. Google gives a facebook account as the first result, but I have no idea if it's an actual match.
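The selection scheme the quote describes (sources tagged by orientation, subject, and geography, plus a program that emphasizes diversity) could be sketched roughly like this; the source names, tags, and scoring rule here are all invented for illustration, since the book gives no details of Doug's actual implementation:

```python
from collections import Counter

# Hypothetical stand-in for Doug's database: each source is tagged by
# ideological orientation, subject matter, and geographical origin.
SOURCES = [
    {"name": "NYT op-ed",         "tags": {"left", "politics", "us"}},
    {"name": "Obscure econ blog", "tags": {"right", "economics", "eu"}},
    {"name": "Science weekly",    "tags": {"center", "science", "asia"}},
    {"name": "Local paper",       "tags": {"center", "politics", "us"}},
]

def pick_next(history):
    """Pick the source least similar to the reading history: count how
    often each tag has already been seen, score every candidate by the
    total familiarity of its tags, and return the most novel one."""
    seen = Counter(tag for item in history for tag in item["tags"])
    return min(SOURCES, key=lambda s: sum(seen[t] for t in s["tags"]))
```

So after reading two US-politics pieces, the picker steers toward the economics blog, since none of its tags have been seen yet.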

Comment author: ChristianKl 19 October 2015 01:58:37PM *  5 points [-]

The facebook account links to a blog: http://newsandold.blogspot.de/ The blog indicates that he's politically knowledgeable. The facebook account says that he worked at IBM, while the book reports the superforecaster as a retired computer programmer.

I think he's your man ;) The facebook account only has 12 friends so it doesn't seem to be very active. But it's worth a try to contact him.

Comment author: signal 04 January 2016 08:59:22PM 0 points [-]

Did anything come from this? Would love to see that, too!

Comment author: John_Maxwell_IV 19 October 2015 07:47:39AM 9 points [-]
Comment author: Clarity 19 October 2015 08:19:14AM 3 points [-]

Actually very high quality subreddit. I'm impressed.

Comment author: Soothsilver 19 October 2015 07:41:43PM 3 points [-]

I never realized how many people there are who say "it's a good thing if AI obliterates humanity, it deserves to live more than we do".

Comment author: OrphanWilde 19 October 2015 08:21:10PM 3 points [-]

On some level, the question really comes down to what kind of successors we want to create; they aren't going to be us, either way.

Comment author: ChristianKl 19 October 2015 10:53:20PM 5 points [-]

On some level, the question really comes down to what kind of successors we want to create; they aren't going to be us, either way.

That depends on whether you plan to die.

Comment author: OrphanWilde 19 October 2015 10:59:36PM 5 points [-]

If I didn't, the person I become ten thousand years from now isn't going to be me; I will be at most a distant memory from a time long past.

Comment author: Soothsilver 20 October 2015 01:13:23PM 2 points [-]

It will still be more "me" than paperclips.

Comment author: OrphanWilde 20 October 2015 01:29:54PM 3 points [-]

Than paperclips, yes. Than a paperclip optimizer?

Well... ten thousand years is a very, very long time.

Comment author: passive_fist 20 October 2015 02:52:27AM -1 points [-]

It's a perfectly reasonable position when you consider that humanity is not going to survive long-term anyway. We're either going extinct and leaving nothing behind, evolving into something completely new and alien, or getting destroyed by our intelligent creations. The first possibility is undesirable. The second and third are indistinguishable from the point of view of the present (if you assume that AI will be developed far enough into the future that no current humans will suffer any pain or sudden death because of it).

Comment author: Soothsilver 20 October 2015 01:12:54PM 5 points [-]

You might still want your children to live rather than die.

Comment author: Gurkenglas 21 October 2015 09:10:05PM *  2 points [-]

The questions asked there mostly seem basic and answered by some sequence or another. Maybe someone should make a post pointing out the most relevant sequences so those people can be thinking about the unsolved problems on the frontier?

Comment author: John_Maxwell_IV 26 October 2015 07:56:05AM *  1 point [-]

Great idea. I commission you for the task! (You might also succeed in collecting effective critiques of the sequences.)

Comment author: Viliam 21 October 2015 10:20:34AM 2 points [-]

If you post an article there, it is subtitled "self.ControlProblem". Seems like many people there have a problem with self control. :D

Comment author: gjm 23 October 2015 02:57:54PM 7 points [-]

If any moderator is reading this: user denature123 has posted large quantities of ugly spammy comments; if s/he and they could be blown away, that would be nice.

Comment author: iarwain1 19 October 2015 02:05:42PM *  7 points [-]

What makes a good primary care physician and how do I go about finding one?

Comment author: Sjcs 20 October 2015 10:29:35AM *  5 points [-]

Off the top of my head, the most reliable way would be to ask another senior medical professional - senior because they tend to have been in the same geographic area for a while, know their colleagues, and have more direct contact with primary care physicians. Also, rather than asking "who should I see as my primary care physician?", you could ask "who would you send your family to see?" This might help prevent them from just recommending a friend/someone with whom they have a financial relationship. I note that this would be relatively hard to do unless you already know a senior medical professional.

Another option would be to ask a medical student (if you happen to know any in your area) which primary care physicians teach at their university and which they would recommend. Through my medical training I have found teaching at a medical school to be weak-to-moderate evidence of being above average. Asking a medical student helps add a filter against some of the less competent ones, strengthening this evidence.

I think lay-people's opinions correlate much more strongly with how approachable and nice their doctor is, as opposed to competence. Doctor rating sites could be used just to select for pleasant ones, if you care about that aspect.

(Caveats: opinion-based; my experience is limited to the country I trained in; I am junior in experience.)

Comment author: Tem42 19 October 2015 10:26:55PM 3 points [-]

Ask everyone you know; ask for their recommendations, and ask why they make those recommendations. Most of the answers you get will not be worth much, but look for the good answers; you only need one.

The trick here is that while it is nearly impossible to find the perfect doctor through any method, you are only looking for a good doctor. Any reasonable recommendation followed by a quick Google search (Google allows reviews on doctors, and most established doctors in larger cities will have at least one or two) to weed out the bad apples will do. This is one of those situations where the perfect is the enemy of productivity.

Comment author: ChristianKl 19 October 2015 10:50:44PM *  1 point [-]

On what basis do you believe that publicly posted reviews of doctors correlate with the quality of a doctor's medical ability?

Comment author: Tem42 20 October 2015 01:21:35AM 3 points [-]

I don't assume much of a reliable correlation; but it doesn't require much. Once you have found a likely few doctors, it is worth finding out if a lot of people hate one of them -- particularly if they explain why. It's basically a very cheap way to filter out potential problems. If I felt that there was a strong correlation, I would have recommended starting with the Google reviews -- after all, Googling is much more time expedient than talking to people.

For context, of the few doctors I sampled on Google review, I found none of them to have anything significant posted in their review. The worst I saw was "receptionist was very rude!"

Given two or more okay choices of doctors given by friends and acquaintances, I think that it is fair to apply this sort of filter, even if you have weak evidence that it is effective. The worst that will happen is that you make the other good choice, rather than the good choice you would have made. The best that might happen is that you avoid an unpleasant experience (well, the best is that you lower your chances of dying through physician error). This calculation may change if you have only one doctor under consideration.

Comment author: ChristianKl 20 October 2015 08:58:16AM 0 points [-]

The best that might happen is that you avoid an unpleasant experience

If a doctor tells you the straight truth about what you have to change in your life that can be unpleasant. I think it can lead to bad reviews. I don't know whether it's useful to avoid those doctors on the other hand. Defensive medicine doesn't seem to be something to strive for.

Comment author: Tem42 20 October 2015 10:52:52AM 1 point [-]

Yes, but if you are reading the reviews, you will be able to determine if they are useful to you. Many will not be. You should certainly be applying the same critical thinking skills that you used when hearing recommendations from your friends in the first place.

I am assuming that there are useful negative comments, although I haven't seen any yet. (My interpretation was that this was because I was only looking at good doctors to start with). If you have a useful comment on any doctor you have seen, please do add it -- it could save someone some trouble.

Comment author: Lumifer 19 October 2015 04:18:22PM 3 points [-]

What makes a good primary care physician

First of all, competence and skill.

Just like everyone else, doctors vary in how good they are. Unfortunately, there is a popular meme (actively promulgated by the doctors' guild) that all doctors are sufficiently competent that any will do. That's... not true.

Given this, it shouldn't be surprising that finding out a particular doctor's competency ex ante is hard to impossible (unless s/he screwed up so badly that s/he ran into trouble with the law or the medical board). Typically you'll have to rely on proxies (e.g. the reputation of the medical school s/he went to).

Beyond that, things start to depend on what you need a doctor for. If you have a condition to be treated, you probably want a specialist in it (even primary care physicians have specializations). If you want to run a lot of tests on yourself, you want a doctor who's amenable to ordering whatever tests you ask for. Etc., etc.

Comment author: Fluttershy 19 October 2015 02:46:48PM 2 points [-]

This is a great question, and I'm glad that you asked, since I am interested in hearing what people think about this as well. I suppose that word of mouth is generally superior to, say, just searching for a primary care doctor through your insurance provider's website, but I don't have any more specific ideas than that.

Personally, I can, and often have, put off going to the doctor due to akrasia, so I put a bit of extra weight on how nice the doctor is-- having a nice doctor lowers the willpower-activation-energy needed for me to make an appointment. I also think that willingness to spend time with patients is important, but I'd be more likely to think this than the average person-- I'm pretty shy, so I'll often tell my doctors that I don't have any more questions (when I actually do) if they seem like they're in a hurry, so as to not bother them.

Comment author: raydora 20 October 2015 12:21:01AM *  0 points [-]

I don't have any surefire methods that don't require a very basic working knowledge of medicine, but a general rule of thumb is the physician's opinion of the algorithmic approach to medical decision making. If it is clearly negative, I'd be willing to bet that the physician is bad. Not quite the same as finding a good one, but decent for narrowing your search.

Along with this, look for someone who thinks in terms of possibilities rather than certainties in diagnoses.

All assuming you're looking for a general practitioner, of course. I wouldn't select surgeons based on this rule of thumb, for instance.

If you're looking for someone who simply has a good bedside manner, then reviews and word of mouth do work.

Comment author: Dorikka 20 October 2015 02:49:15AM 2 points [-]

Any particular evidence in favor of this approach, anecdotal or otherwise?

Comment author: raydora 08 November 2015 03:29:06AM 1 point [-]

Late reply, I know!

Standardizing decisions through checklists and decision trees has, in general, shown to be useful if the principles behind those algorithms are based on a reliable map. In medical practice, that's probably the evidence-based medicine approach to screening, diagnosis, and treatment.

In addition, all this assumes that patient management skills are not a concern, since they're not something I personally consider important (from the point of view of a patient) when choosing a provider of any medical or technical service. If you typically require more from your physician than medical evaluation and treatment (and many people do see physicians as societal pillars and someone to talk to about their non-medical problems), then it is something to keep in mind.

Anecdotally, every medical provider I've encountered who was a vocal opponent of clinical decision support systems had a tendency to jump to dramatic conclusions that were later proven wrong.

This is one of the few studies on the subject that isn't behind a paywall.

Comment author: qmotus 19 October 2015 08:17:13AM *  7 points [-]

It's often entertained on LessWrong that if we live in some sort of a big world, then conscious observers will necessarily be immortal in a subjective sense. The most familiar form of this idea is quantum immortality in the context of MWI, but arguably a similar sort of what I would call 'big world immortality' is also implied if, for example, we live in another sort of multiverse or in a simulation.

It seems to me that big world scenarios are well accepted here, but that a lot of people don't take big world immortality very seriously. This confuses me, and I wonder if I'm missing something. I suppose that there are good counterarguments that I haven't come across or that haven't actually been presented yet because people haven't spent that much time thinking about stuff like this. The ones I have read are from Max Tegmark, who's stated that he doesn't believe quantum immortality to be true because death is a gradual, not a binary process, and (in Our Mathematical Universe) because he doesn't expect the necessary infinities to actually occur in nature. I'm not sure how credible I find these.

So, should we take big world immortality seriously? I'd appreciate any input, as this has been bothering me quite a bit as of late and has had a rather detrimental effect on my life. Note that I'm not exactly thrilled about this; to me, this kind of involuntary immortality, which nevertheless doesn't guarantee from an observer's point of view that anyone else will survive, sounds pretty horrible. David Lewis presented a very pessimistic scenario in 'How Many Lives Has Schrödinger's Cat?' as well.

Comment author: Kaj_Sotala 21 October 2015 05:47:07PM 6 points [-]

So, should we take big world immortality seriously?

Whether or not we take it seriously doesn't seem to have any effect on how we should behave as far as I can tell, so what would taking it seriously imply?

Comment author: qmotus 22 October 2015 01:37:35PM 0 points [-]

I mostly wanted to hear opinions on whether to believe it or not. But anyways, I'm not so sure that you're correct. I think we should find out whether big world immortality should affect our decisions or not. If it is true then I believe that we should, for instance, worry quite a bit about the measure of comfortable survival scenarios versus uncomfortable scenarios. This might have implications regarding, for example, whether or not to sign up for cryonics (I'm not interested in general, but if it significantly increases the likelihood that big world immortality leads to something comfortable, I might) or worrying about existential risk (from a purely selfish point of view, existential risk is much more threatening if I'm guaranteed to survive no matter what, but from my point of view no one else is, than in the case where it's just as likely to wipe me out as anyone else).

Comment author: entirelyuseless 22 October 2015 05:07:08PM 0 points [-]

If you're going to worry about things like that if big world immortality is true, you can just worry about them anyway, because the only thing that you will ever observe (even if big world immortality is false) is that you always continue to survive, even when other people die, even from things like nuclear war.

Your observations will always be compatible with your personal immortality, no matter what the truth is.

Comment author: qmotus 28 October 2015 06:32:40PM 0 points [-]

Well, sort of, but I still think there is an important difference in that without big world immortality all the survival scenarios may be so unlikely that they aren't worthy of serious consideration, whereas with it one is guaranteed to experience survival, and the likelihood of experiencing certain types of survival becomes important.

Let's suppose you're in a situation where you can sacrifice yourself to save someone you care about, and there's a very, very big chance that if you do so, you die, but a very, very small chance that you end up alive but crippled, but the crippled scenarios form the vast majority of the scenarios in which you survive. Wouldn't your choice depend at least to some degree on whether you expect to experience survival no matter what, or not?

Comment author: RowanE 20 October 2015 05:40:49AM 2 points [-]

I consciously will myself to believe in big world immortality, as a response to existential crises, although I don't seem to have actual reasons not to believe such besides intuitions about consciousness/the self that I've seen debated enough to distrust.

Comment author: qmotus 20 October 2015 08:24:12AM 0 points [-]

So did I understand correctly, believing in big world immortality doesn't cause you an existential crisis, but not believing in it does?

Comment author: RowanE 22 October 2015 10:11:01PM 0 points [-]

Yes - I mean existential crisis in the sense of dread and terror from letting my mind dwell on my eventual death; convincing myself I'm immortal is a decisive solution to that, insofar as I can actually convince myself. I don't mind existence being meaningless; it is that either way. I care much more about whether it ends.

Comment author: qmotus 28 October 2015 06:23:02PM 0 points [-]

So you're not worried that it might be unending but very uncomfortable?

Comment author: passive_fist 20 October 2015 02:47:27AM 2 points [-]

Some tangential food for thought: My grandfather died recently after a slow and gradual eight-year decline in health. He suffered from a kind of neurodegenerative disorder with symptoms including various clots and plaques in his brain that gradually increased in size and number while the functioning proportion of his brain tissue decreased.

During the first year he had simple forgetfulness. In the second year it progressed to wandering and excessive eating. It then slowly progressed to incontinence, lack of ability to speak, and soon, lack of ability to move. During his final three years he was entirely bedridden and rarely made any voluntary motor movements even when he was fully awake. His muscle mass had decreased to virtually nothing. During his last month he could not even perform the necessary motor movements to eat food and had to go on life support. When he finally did die, many in the family said it didn't make any difference because he was already dead. I was amazed that he held out as long as he did; surely his heart should have given out a long time ago.

Was I a witness to his gradual dissolution in a sequence of ever-increasingly-unlikely universes? Maybe in some other thread he had a quick and painless death. Maybe in an even less likely thread, he continued declining in health to an even less likely state of bodily function.

Comment author: qmotus 20 October 2015 08:50:07AM 0 points [-]

Well, that's just sad. But I suppose you should believe that you witnessed a relatively normal course of decline. In more unlikely threads there possibly were quick and painless deaths, continuing declining, and also miraculous recoveries.

I guess the interesting question your example raises, in this context, is this: is there a way to draw a line from your grandfather in a mentally declined state to a state of having miraculously recovered, or is there a fuzzy border somewhere that can only be crossed once?

Comment author: gjm 20 October 2015 04:32:43PM 1 point [-]

It seems to me that a disease that inflicts gross damage to substantial volumes of brain pretty much destroys the relevant information, in which case there probably isn't much more line from "mentally declined grandfather" to "miraculously restored grandfather" than from "mentally declined grandfather" to "grandfather miraculously restored to someone else's state of normal mental functioning" (complete with wrong memories, different personality, etc.).

Comment author: entirelyuseless 19 October 2015 01:27:26PM *  2 points [-]

I think it should be taken seriously, in the sense that there is a significant chance that it is true. I agree that Less Wrong in general tends to be excessively skeptical of the possibility, probably due to an excessive skepticism-of-weird-things in general, and possibly due to an implicit association with religion.

However:

1) It may just be false, because the big world scenarios may fail to be true.

2) It may be false because the big world scenarios fail to be true in the way required; for example, I don't think anyone really knows which possibilities are actually implied by the many-worlds interpretation (MWI) of quantum mechanics.

3) It may be false because "consciousness just doesn't work that way." While you can argue that this isn't possible or meaningful, it is an argument, not an empirical observation, and you may be wrong.

4) If it's true, it is probably true in an uncontrollable way: you will have no say in what happens to you after other observers see you die, or in whether it is good or bad (and an argument can be made that it would probably be bad). This makes the question of whether it is true much less relevant to our current lives, since our actions cannot affect it.

5) There may be a principle of caution at work (among Less Wrong readers). People are inclined to exaggerate the probability of very bad things in order to be sure to avoid them. So if final death is very bad, people will be inclined to exaggerate the probability that ordinary death is final.

Comment author: Kaj_Sotala 21 October 2015 05:43:39PM *  2 points [-]

I agree that Less Wrong in general tends to be excessively skeptical of the possibility, probably due to an excessive skepticism-of-weird-things in general

Of all the things LW has been accused of, this is the first time I see a skepticism-of-weird-things in general being attributed to the site.

Comment author: Lumifer 21 October 2015 06:32:09PM 2 points [-]

While a valid point, LW does have a shut-up-and-just-believe-the-experts wing.

Comment author: qmotus 20 October 2015 08:36:36AM 0 points [-]

Regarding one, two and three: shouldn't we, in any case, be able to make an educated guess? Am I wrong in assuming that based on our current scientific knowledge, it is more likely true than not? (My current feeling is that based on my own understanding, this is what I should believe, but that the idea is so outrageous that there ought to be a flaw somewhere.)

Two is an interesting point, though; I find it a bit baffling that there seems to be no consensus about how infinities actually work in the context of multiverses ("infinite", "practically infinite" and "very big" are routinely used interchangeably, at least in text that is not rigorously scientific).

Regarding four, I'm not so sure. Take cryonics for example. I suppose it does either increase or decrease the likelihood that a person ends up in an uncomfortable world. Which way is it, and how big is the effect? Of course, it's possible that in the really long run (say, trillions of times the lifespan of the universe) it doesn't matter.

Regarding five, I guess so. Then again, one might argue that big world immortality would itself be a 'very bad thing'.

Comment author: Panorama 23 October 2015 10:26:00AM *  6 points [-]

The scientists encouraging online piracy with a secret codeword

What if you're a scientist looking for the latest published research on a particular subject, but you can't afford to pay for it?

...

Andrea Kuszewski, a cognitive scientist and science writer, invented the tag, which uses a code phrase: "I can haz PDF" - a play on words combining a popular geeky phrase used widely online in a meme involving cat pictures, and a common online file format.

"Basically you tweet out a link to the paper that you need, with the hashtag and then your email address," she told BBC Trending radio. "And someone will respond to your email and send it to you." Who might that "someone" be? Kuszewski says scientists who have access to journals, through subscriptions or the institutions they work at, look out for the tag so they can help out colleagues in need.

Comment author: philh 23 October 2015 10:53:44AM 4 points [-]

Amusingly, right now the hashtag seems to be dominated by people talking about the article/phenomenon, not by people trying to get pdfs.

Comment author: ChristianKl 23 October 2015 10:36:15AM 2 points [-]

I'm shocked, shocked...

Comment author: Vladimir_Golovin 21 October 2015 10:41:42AM *  6 points [-]

Just a quick dump of what I've been thinking recently:

  • A train of thought is a sequence of thoughts about a particular topic that lasts for some time, which may produce results in the form of decisions and updated beliefs.

  • My work, as a technical co-founder of a software company, essentially consists of riding the right trains of thought and documenting decisions that arise during the ride.

  • Akrasia, in my case, means that I'm riding the wrong train of thought.

  • Distraction means some outside stimulus that compels my mind to hop to a train of thought different from the one it is currently riding or should be riding. The stimuli can be anything: people talking to me, a news story, a sexually attractive person across the street, an advertisement, etc.

  • Some train rides are long: they last for hours, days or even weeks, while some are short and last for seconds or minutes. Historically, I've done my best work on very long rides.

  • Different trains of thought have different 'ticket costs'. Hopping to a sex-related or a politics-related train of thought is extremely cheap. Caching a big chunk of a problem into my mind requires conscious effort, and thus the ticket is more expensive. In my case, the right trains of thought are usually expensive.

  • Interruptions set back the distance traveled, or, in some cases, completely reset the distance to the original departure station. Or they may switch me to a different train of thought completely, while, at the same time, depleting the resource (willpower?) that I need for boarding the correct train of thought.

  • My not-so-recent decision to stop reading people news has greatly reduced the number and severity of unwanted / involuntary train hops.

  • My "superfocus periods", during which I'm able to ride a single right train of thought for multiple days or weeks, are mostly due to the absence of stimuli that compel my mind to jump to different, cheaper trains of thought. These periods happen when I'm away from work and sometimes from my family, which means I can safely drop my everyday duties such as showing up in the office, doing errands, replying to emails, meeting people, etc.

  • Keeping a detailed work diary is tremendously helpful for re-boarding the right train of thought after severe interruptions / "cache wipes". I use Workflowy.

  • I've noticed that I'm reluctant to board long rides when I expect interruptions during the ride. Recent examples include reluctance to read Bostrom's Superintelligence at home, or to 'load' a large piece of a project into my head at work, because my office is full of programmers who ask (completely legitimate) questions about their current tasks.

Comment author: Clarity 20 October 2015 09:55:52AM 6 points [-]

Think you have a finely calibrated and important information diet? Imagine if you had the world's strongest intelligence agency tailor the news for you. Well, you don't have to imagine, because the president's daily briefs have just been declassified. If you're interested, you can collaborate with researchers to get a better handle on it. Enjoy.

Comment author: satt 20 October 2015 09:31:45PM 5 points [-]

For convenience, here's a link to the individual briefs as separate PDF files, for anyone else who doesn't want to download all 34MB at once. (I thought the Flickr page might have a few convenient, face-on snapshots of pages from the briefs, but the CIA reckoned it was more important to take 5 photos of a woman wheeling a trolley of briefs through the CIA lobby. #thanksguys)

I suspect daily presidential briefings from the CIA are finely (as in carefully & deliberately) calibrated but not that well calibrated (as in being accurate, representative and not tendentious). The CIA doubtless has incentives to misrepresent some things to the president — and indeed a president probably has some incentives to allow/encourage being misled about certain things!

Comment author: ChristianKl 20 October 2015 10:52:53AM 4 points [-]

It's an interesting data set, but I don't think it's useful as a primary source. Given that the freshest "news" in the pile is from 1977, I don't think the term "news" is appropriate. If you are interested in what happened 40 years ago, it might be better to read more recently written history books than contemporary intelligence analysis.

Comment author: Panorama 23 October 2015 01:19:29PM 5 points [-]

Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars?

The wide adoption of self-driving, Autonomous Vehicles (AVs) promises to dramatically reduce the number of traffic accidents. Some accidents, though, will be inevitable, because some situations will require AVs to choose the lesser of two evils. For example, running over a pedestrian on the road or a passer-by on the side; or choosing whether to run over a group of pedestrians or to sacrifice the passenger by driving into a wall. It is a formidable challenge to define the algorithms that will guide AVs confronted with such moral dilemmas. In particular, these moral algorithms will need to accomplish three potentially incompatible objectives: being consistent, not causing public outrage, and not discouraging buyers. We argue to achieve these objectives, manufacturers and regulators will need psychologists to apply the methods of experimental ethics to situations involving AVs and unavoidable harm. To illustrate our claim, we report three surveys showing that laypersons are relatively comfortable with utilitarian AVs, programmed to minimize the death toll in case of unavoidable harm. We give special attention to whether an AV should save lives by sacrificing its owner, and provide insights into (i) the perceived morality of this self-sacrifice, (ii) the willingness to see this self-sacrifice being legally enforced, (iii) the expectations that AVs will be programmed to self-sacrifice, and (iv) the willingness to buy self-sacrificing AVs.

Comment author: Panorama 23 October 2015 10:12:43AM 5 points [-]

UN climate reports are increasingly unreadable

The climate summary findings of the Intergovernmental Panel on Climate Change (IPCC) are becoming increasingly unreadable, a linguistics analysis suggests.

IPCC summaries are intended for non-scientific audiences. Yet their readability has dropped over the past two decades, and reached a low point with the fifth and latest summary published in 2014, according to a study published in Nature Climate Change1.

The study used the Flesch Reading Ease test, which assumes that texts with longer sentences and more complex words are harder to read. Reports from the IPCC’s Working Group III, which focuses on what can be done to mitigate climate change by cutting carbon dioxide emissions, received the lowest marks for readability.
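The Flesch formula itself is simple to compute. Here is a rough sketch; the syllable counter is a crude vowel-group heuristic, so scores will not exactly match published readability tools:

```python
import re

def count_syllables(word):
    """Crude heuristic: count groups of consecutive vowels,
    discounting a silent final 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text):
    """Higher scores mean easier text; long sentences and
    polysyllabic words drive the score down."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

easy = "The cat sat on the mat. It was warm."
hard = ("Anthropogenic interventions necessitate comprehensive "
        "multilateral decarbonization methodologies.")
assert flesch_reading_ease(easy) > flesch_reading_ease(hard)
```

The two sample sentences are invented for illustration; the point is simply that dense, polysyllabic prose of the kind the IPCC summaries are criticized for scores far lower than plain writing.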

Confusion created by the writing style of the summaries could hamper political progress on tackling greenhouse-gas emissions, thinks Ralf Barkemeyer, who led the analysis and works on sustainable business management at the KEDGE Business School in Bordeaux, France. The readability scores “are not just low but exceptionally low”, he says.

Comment author: cousin_it 21 October 2015 12:56:00PM *  4 points [-]

Thanks to Turing completeness, there might be many possible worlds whose basic physics are much simpler than ours, but that can still support evolution and complex computations. Why aren't we in such a world? Some possible answers:

1) Luck

2) Our world has simple physics, but we haven't figured it out

3) Anthropic probabilities aren't weighted by simplicity

4) Evolution requires complex physics

5) Conscious observers require complex physics

Anything else? Any guesses which one is right?

Comment author: solipsist 05 December 2015 06:03:58PM *  2 points [-]

Other answers I've considered:

o) Simpler universes are more likely, but complicated universes vastly outnumber simple ones. It's rare to be at the mode, even though the mode is the most common place to be.

p) Beings in simple universes don't ask this question because their universe is simple. We are asking this question, therefore we are not in a simple universe.

2') You don't spend time pondering questions you can quickly answer. If you discover yourself thinking about a philosophy problem, you should expect to be on the stupider end of entities capable of thinking about that problem.

Comment author: IlyaShpitser 24 October 2015 10:30:36PM *  1 point [-]

n) The world is optimized for good theatre, not simplicity.

Comment author: lmm 24 October 2015 05:44:39PM 0 points [-]

My guess is #2.

Comment author: Manfred 21 October 2015 07:30:05PM *  0 points [-]

I'm of the opinion that there isn't going to be a satisfactory answer. It's true that the complexity of our universe makes it more likely that there's some special explanation, but sometimes things just happen. Why am I the me on October 21, and not the me on some other day? Well, it's a hard job, but someone's got to do it.

Comment author: cousin_it 22 October 2015 02:29:23PM *  0 points [-]

That's #1. It would be good to know exactly how lucky we got, though.

Comment author: Dagon 21 October 2015 02:55:36PM 0 points [-]

How do #1 and #3 differ? I think both are "yes, there are many such worlds - we happen to be in this one".

Comment author: polymathwannabe 21 October 2015 04:54:01PM *  1 point [-]

It doesn't sound impossible that anthropic probabilities are weighted by simplicity and we're lucky.

Comment author: Dagon 21 October 2015 07:11:27PM 0 points [-]

Hmm. I think "we're lucky" implies "probabilities are irrelevant for actual results", so it obsoletes #3.

Comment author: cousin_it 21 October 2015 07:42:40PM *  1 point [-]

I think "we're lucky" vs "simplicity is irrelevant" affects how much undiscovered complexity in physics we should expect.

Comment author: Houshalter 19 October 2015 07:36:00PM *  4 points [-]

I've been thinking about some of the issues with CEV. It's come up a few times that humanity might not have a coherent, non-contradictory set of values, and the question arises of how to come up with some set of values that best represents everyone.

It occurs to me that this might be a problem mathematicians have already solved, or at least given a lot of thought. In the form of voting systems. Voting is a very similar problem. You have a bunch of people you want to represent fairly, and you need to select a leader that best represents their interests.

My favorite alternative voting system is the Condorcet method. It compares every pair of candidates in a head-to-head election, and selects the candidate who would have won every one of those pairwise elections.

It is possible for there to be no Condorcet winner, if the population has circular preferences: Candidate A > Candidate B > Candidate C > Candidate A... like a rock-paper-scissors thing.

To solve this, there are a number of methods developed to select the best compromise. My favorite is Minimax. It selects the candidate whose greatest pairwise defeat is the least bad. I think that's the most desirable way to pick a winner, and it's also super simple.
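A minimal sketch of pairwise tallying plus the minimax rule (this is the winning-votes variant; the candidate names and ballot counts are invented to produce a rock-paper-scissors cycle):

```python
def minimax_winner(ballots, candidates):
    """ballots: list of rankings (best first). Returns the Condorcet
    winner if one exists; otherwise the candidate whose worst
    pairwise defeat is smallest (minimax, winning-votes variant)."""
    # prefer[a][b] = number of voters ranking a above b
    prefer = {a: {b: 0 for b in candidates} for a in candidates}
    for ballot in ballots:
        for i, a in enumerate(ballot):
            for b in ballot[i + 1:]:
                prefer[a][b] += 1

    def worst_defeat(c):
        # Largest opposing vote among c's pairwise losses; 0 for a
        # Condorcet winner, who loses no pairwise contest.
        return max((prefer[b][c] for b in candidates
                    if b != c and prefer[b][c] > prefer[c][b]),
                   default=0)

    return min(candidates, key=worst_defeat)

# A cycle: A beats B 6-3, B beats C 7-2, C beats A 5-4.
ballots = ([("A", "B", "C")] * 4 +
           [("B", "C", "A")] * 3 +
           [("C", "A", "B")] * 2)
print(minimax_winner(ballots, ["A", "B", "C"]))  # "A": worst defeat is only 5
```

With no Condorcet winner, minimax picks A, whose only defeat (5-4 against C) is the narrowest of the three.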

There are some differences. Instead of a leader, we want the best set of values and policies for the AI to follow. And there might not be a finite set of candidates, but an infinite number of possibilities. And actually voting might be impractical. Instead an AI might have to predict what you would have voted, if you knew all the arguments and had much time to think about it and come to a conclusion. But I think it can still be modeled as a voting problem.

Now this isn't actually something we need to figure out now. If we somehow had an FAI, we could probably just ask it to come up with the most fair way of representing everyone's values. We probably don't need to hardcode these details.

The bigger issue is why the person or group building the FAI would even bother to do this. They could just take their own CEV and ignore everyone else's. And they have every incentive to do this. It might even be significantly simpler than trying to do a full CEV of humanity. So even if we do solve FAI, humanity is probably still screwed.

EDIT: After giving it some more thought, I'm not sure voting systems are actually desirable. The whole point of voting is that people can't be trusted to just specify their utility functions. The perfect voting system would be for each person to give a number to each candidate based on how much utility they'd get from them being elected. But that's extremely susceptible to tactical voting.

However with FAI, it's possible we could come up with some way of keeping people honest, or peering into their brains and getting their true value function. That adds a great deal of complexity though. And it requires trusting the AI to do a complex, arbitrary, and subjective task. Which means you must have already solved FAI.

Comment author: Tem42 19 October 2015 09:34:30PM 6 points [-]

If I were God of the World, I would model the problem as more of a River Crossing Puzzle. How do you get things moving along when everyone on the boat wants to kill each other? Segregation! Resettling humanity mapped over a giant Venn diagram is trivial once we are all uploaded, but it also runs into ethical problems; just as voting and enacting the will of the majority (or some version thereof) is problematic, so is setting up the world so that the oppressor and the oppressed will never be allowed to meet. However, in my experience people are much happier with rules like "you can't go there" and much less happy with rules like "you have to do what that guy wants". This is probably due to our longstanding tradition of private property.

This makes some assumptions as to what the next world will look like, but I think that it is a likely outcome -- it is always much easier to send the kids to their rooms than to hold a family court, and I think a cost/benefit analysis would almost surely show that it is not worth trying to sort out all human problems as one big happy group.

Of course, this assumes that we don't do something crazy like include democracy and unity of the human race as terminal values.

Comment author: gjm 19 October 2015 11:08:07PM 0 points [-]

Segregation!

This puts me in mind of Eliezer's "Failed Utopia #4-2".

Comment author: Lumifer 19 October 2015 07:50:28PM *  4 points [-]

Voting is a very similar problem.

Not quite.

The local population consists of 80% blue people and 20% orange people. For some reason, the blue people dislike orange people. A blue leader arises who says "We must kill all the orange people and take their stuff!" Well, it's an issue, and how do people properly decide on a policy? By voting, of course. Everyone votes and the policy passes by simple majority. And so the blue people kill all the orange people and take their stuff. The end.

Comment author: [deleted] 20 October 2015 05:07:38PM 2 points [-]

This is exactly the type of problem that mathematicians have tried to solve with different voting schemes. One recent example that has the potential to solve this problem is quadratic vote buying, which takes into account the strong preferences of minorities.
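Under quadratic vote buying, casting v votes costs v² credits, so the number of votes a rational voter buys grows with the intensity of their preference. A toy sketch, with entirely hypothetical intensity numbers, shows how an intense minority can outvote a lukewarm majority:

```python
def votes_bought(value_per_vote):
    """Buying v votes costs v**2 credits, so the marginal cost of the
    (v+1)-th vote is 2*v + 1. A rational voter keeps buying votes
    while the marginal cost is at most the per-vote value."""
    v = 0
    while (2 * v + 1) <= value_per_vote:
        v += 1
    return v

# Hypothetical numbers: 80 majority voters mildly in favor of a
# policy (value 2 each), 20 minority voters intensely opposed
# (value 20 each).
majority_votes = 80 * votes_bought(2)   # 80 * 1  = 80
minority_votes = 20 * votes_bought(20)  # 20 * 10 = 200
print(majority_votes, minority_votes)   # the intense minority prevails
```

Under one-person-one-vote the 80 would simply outvote the 20; under the quadratic pricing the minority's willingness to spend on the issue carries the day.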

Comment author: Lumifer 20 October 2015 05:24:05PM 2 points [-]

This is exactly the type of problems that mathematicians have tried to solve

I am not sure this is a mathematical problem. Generally speaking, giving a minority the veto power trades off minority safety against government ability to do things. In the limit you have decision making by consensus which has obvious problems.

quadratic vote buying

What do you buy votes with? Money? Then it's an easy way for the blue people to first take orange people's stuff and then, once the orange people run out of resources to buy votes with, to kill them anyway.

Comment author: [deleted] 20 October 2015 05:37:58PM *  1 point [-]

Generally speaking, giving a minority the veto power trades off minority safety against government ability to do things. In the limit you have decision making by consensus which has obvious problems.

That's precisely why it is a mathematical problem... you need to quantify the tradeoffs, and figure out which voting schemes maximize different value schemes and utility functions. Math can't SOLVE this problem because it's an ought problem, not an is problem.

But you can't answer the ought side of things without first knowing the is side.


In terms of quadratic vote buying, money is only one way to do it, another is to have an artificial or digital currency just for vote buying, for which people get a fixed amount for the year.

I don't think your concept of it really makes sense in the context of modern government with a police force, international oversight, etc. All voting schemes break down when you assume a base state of anarchy - but assuming there's already a rule of law in place, you can maximize how effective those laws are (or the politicians who make them) by changing your voting rules.

Comment author: Lumifer 20 October 2015 05:48:13PM 0 points [-]

That's precisely why it is a mathematical problem... Math can't SOLVE this problem

Ahem.

in the context of modern government with a police force, international oversight, etc.

I would be quite interested to learn who exerts "international oversight" over, say, USA.

Besides, are you really saying a "modern" government can do no wrong??

assuming there's already a rule of law in place, you can maximize how effective those laws are

I'm sorry, I'm not talking about the executive function of the government which merely implements the laws, I'm talking about the legislative function which actually makes the laws. There is no assumption of the base state of anarchy.

Comment author: [deleted] 20 October 2015 05:52:50PM *  -1 points [-]

Ahem

This isn't helpful. There's nothing for me to respond to.

I would be quite interested to learn who exerts "international oversight" over, say, USA.

The UN (specifically, other very powerful countries that trade with the US).

I'm talking about the legislative function which actually makes the laws. There is no assumption of the base state of anarchy.

Would a historical example of what you're talking about be the legality of slavery?

Comment author: Lumifer 20 October 2015 05:58:57PM *  0 points [-]

There's nothing for me to respond to.

Let me unroll my ahem.

You claimed this is a mathematical problem, but in the next breath said that math can't solve it. Then what was the point of claiming it to be a math problem in the first place? Just because dealing with it involves numbers? That does not make it a math problem.

The UN

LOL. Can we please stick a bit closer to the real world?

Would a historical example of what you're talking about be the legality of slavery?

Actually, the first example that comes to mind is the when the US decided that all Americans who happen to be of Japanese descent and have the misfortune to live on the West Coast need to be rounded up and sent to concentration, err.. internment camps.

Comment author: JoshuaZ 23 October 2015 11:58:15PM -1 points [-]

Problems can have a mathematical aspect without being completely solvable by math.

Comment author: WalterL 21 October 2015 08:44:42PM 1 point [-]

I'm not sure that's a fair problem to ascribe to voting. If >50% of that populace wants to kill the orange folks its going to happen, however they select their leaders. It isn't voting's fault that this example is filled with maniacs.

Comment author: Clarity 19 October 2015 11:05:41AM 4 points [-]

What is the future of electronics technicians? Are they a good career choice? Will their skills quickly become obsolete due to coming hardware changes?

Comment author: Elo 20 October 2015 12:30:29AM 1 point [-]

I think the general consensus is robot-automation. Seems like a task that can be done by a robot...

Comment author: gjm 24 October 2015 08:57:02AM 3 points [-]

More spam: someone called "lucy" is posting identical nonsense about vampires to multiple threads. Less obnoxious than denature123 yesterday, but still certainly spam.

(I've been assuming that, given that we have multiple moderators, it's better to post comments like this than to PM one or more individual moderators. I will be glad of correction from actual moderators if some other approach is better.)

Comment author: CAE_Jones 24 October 2015 06:21:53PM 0 points [-]

I must admit, had lucy managed to only post the vampire ads in threads about interventions to increase longevity / social skills / etc, I might have considered them worth keeping around for entertainment value. At least then we could use them as an excuse to discuss how blood transfusions from healthy donors affect various quality-of-life factors.

(I wonder how long before someone tries to start a business based around selling healthy blood / fecal transplants / etc, and how long before the FDA tells them to stop before they sell someone diseases.)

Comment author: JoshuaZ 23 October 2015 11:51:19PM 3 points [-]
Comment author: Panorama 23 October 2015 10:16:30AM 3 points [-]

'Zeno effect' verified: Atoms won't move while you watch

One of the oddest predictions of quantum theory – that a system can’t change while you’re watching it – has been confirmed in an experiment by Cornell physicists.

...

Graduate students Yogesh Patil and Srivatsan K. Chakram created and cooled a gas of about a billion Rubidium atoms inside a vacuum chamber and suspended the mass between laser beams. In that state the atoms arrange in an orderly lattice just as they would in a crystalline solid. But at such low temperatures, the atoms can “tunnel” from place to place in the lattice. The famous Heisenberg uncertainty principle says that the position and velocity of a particle interact. Temperature is a measure of a particle’s motion. Under extreme cold velocity is almost zero, so there is a lot of flexibility in position; when you observe them, atoms are as likely to be in one place in the lattice as another.

...

The researchers observed the atoms under a microscope by illuminating them with a separate imaging laser. A light microscope can’t see individual atoms, but the imaging laser causes them to fluoresce, and the microscope captured the flashes of light. When the imaging laser was off, or turned on only dimly, the atoms tunneled freely. But as the imaging beam was made brighter and measurements made more frequently, the tunneling reduced dramatically.

Comment author: Douglas_Knight 19 October 2015 02:53:15PM 3 points [-]

Do people take advantage of instant run-off voting to "not throw away their vote"?

What do they do in Australia? Where else do people have such systems? I suppose I could just look up Australia, but I fear it might be hard to interpret and I’d rather hear from someone with experience of it.

I ask because the recent British Labour leadership election was very different from the last. I suspect that there was a substantial portion of the electorate who preferred, say, Abbott in 2010, but didn't vote for her because she was not viable. The whole complicated system exists to allow people to simply express their preferences and not put in the strategic voting effort of determining who is viable, but maybe it isn't doing much.

(It is definitely doing something. In 2010, 28% of the vote share went to non-viable candidates. A plurality system applied to those first round votes would have chosen David over Ed.)
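Instant run-off itself is easy to sketch. The ballot counts below are invented and only loosely inspired by the 2010 example, not actual election data; they illustrate how a first-round plurality leader can lose once an eliminated candidate's votes transfer:

```python
from collections import Counter

def instant_runoff(ballots):
    """ballots: list of preference tuples (best first). Repeatedly
    eliminate the candidate with the fewest first-choice votes until
    someone holds a majority of ballots."""
    candidates = {c for b in ballots for c in b}
    while True:
        # Each ballot counts for its highest-ranked surviving candidate.
        tally = Counter(next(c for c in b if c in candidates)
                        for b in ballots)
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > len(ballots):
            return leader
        candidates.remove(min(tally, key=tally.get))

# Hypothetical field of 100 voters: "David" leads round one 38-34-28,
# but "Abbott" transfers elect "Ed".
ballots = ([("David", "Ed")] * 38 +
           [("Ed", "David")] * 34 +
           [("Abbott", "Ed", "David")] * 28)
print(instant_runoff(ballots))  # "Ed" wins after Abbott is eliminated
```

A plurality count of only the first choices would have picked David (38 votes); the run-off transfers reverse that, which is exactly the difference the comment above is pointing at. (The sketch assumes every ballot ranks at least one surviving candidate; exhausted ballots would need extra handling.)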

Comment author: Irgy 20 October 2015 06:45:50AM *  6 points [-]

As an Australian I can say I'm constantly baffled over the shoddy systems used in other countries. People seem to throw around Arrow's impossibility theorem to justify hanging on to whatever terrible system they have, but there's a big difference between obvious strategic voting problems that affect everyone, and a system where problems occur in only fairly extreme circumstances. The only real reason I can see why the USA system persists is that both major parties benefit from it and the system is so good at preventing third parties from having a say that even as a whole they can't generate the will to fix it.

In more direct answer to your question, personally I vote for the parties in exactly the order I prefer them. My vote is usually partitioned as: [Parties I actually like | Major party I prefer | Parties I'm neutral about | Parties I've literally never heard of | Major party I don't prefer | Parties I actively dislike]

A lot of people vote for their preferred party, as evidenced by more primary votes for minor parties. Just doing a quick comparison, in the last (2012) US presidential election only 1.74% of the vote went to minor candidates, while in the last Australian federal election (2013) an entire 21% of the votes went to minor parties.

Overall it works very well in the lower house.

In the upper house, the whole system is so complicated that no-one understands it, and the ballot papers are so big that the effort required to vote in detail prevents most people from bothering. In the upper house I usually just vote for a single party and let their preference distribution be automatically applied for me. Of course I generally check what that is first, though you have to remember to do it beforehand since it's not available while you're voting. Despite all that, though, it's a good system; I wouldn't want it replaced with anything different.

Comment author: Good_Burning_Plastic 19 October 2015 07:26:22PM 2 points [-]

What do they do in Australia?

This, I hear.

Comment author: Irgy 20 October 2015 06:51:29AM 1 point [-]

That's the result of compulsory voting not of preference voting.

Comment author: Clarity 19 October 2015 09:14:30AM *  5 points [-]

Given the absence of a boasting thread recently. Here's a little boasting:

Helped monitor and proxy for these publicly/non-internet-user-accessible (meaning multiple people can use and post with them through me) Reddit accounts

Identified a potent research interest

Monitored and learned from responses to my LW content as appropriate

Comment author: ike 27 October 2015 01:10:02AM 2 points [-]

Munchkining real estate http://www.bloomberg.com/news/articles/2015-10-23/this-startup-tracks-america-s-murder-houses (I'm referring to the resellers mentioned in the article, not the actual startup covered).

Comment author: Curiouskid 27 October 2015 02:55:20AM 0 points [-]

Another thing I've heard recently, but not looked into much, is living on a houseboat off the coast of San Francisco and then paddling in on a kayak.

Comment author: mwengler 21 October 2015 02:03:54PM 2 points [-]

There is significant progress in genetic modification of humans and in physical modification/augmentation of humans. It is plausible we will have genetically modified and/or physically modified human intelligence before we have artificial intelligence.

FAI is the pursuit of artificial intelligence constrained in a way that it will not be a threat to unmodified humans. Or at least that is what it seems to be to me as an observer of discussions here, is this a reasonable description of FAI?

It occurs to me that natural human intelligence has certainly not developed with any such constraints. Indeed, if humanity can develop UAI, then that is essentially proof that human intelligence is not Friendly in the sense we wish FAI to be.

Presumably we have been more worried with how to constrain AI to be friendly because AI could learn to self-modify and experience exponential growth and thus overwhelm human intelligence. But what of modified human intelligence, genetic or physical? These ARE examples of self-modification. And they both appear to be capable of inducing exponential growth.

Is the threat from unfriendly human intelligence any less or any different, or worthy of consideration as an existential risk? If an intelligence arises from modified human, is it a threat to unmodified human, or an enhancement on it? How do we define natural and artificial when our purpose in defining it is to protect the one from the other?

Comment author: polymathwannabe 21 October 2015 05:00:03PM 1 point [-]

Human intelligence has already chosen to maximize the burning of oil with no regard for the viability of our biosphere, so we're already living under an Unfriendly Human Intelligence scenario.

Comment author: g_pepper 21 October 2015 03:08:49PM *  1 point [-]

Bostrom discusses this possibility in Superintelligence, both in the form of enhanced biological cognition and in brain/machine interfaces. Ultimately he argues that a super intelligent singleton is more likely to be a machine than an enhanced biological brain. He argues that increases in cognitive ability should be much faster with a machine intelligence than through biological enhancement, and that machine intelligence is more scalable (I believe that he makes the point that, while a human brain the size of a warehouse is not practical, a computer the size of a warehouse is).

Comment author: Lumifer 21 October 2015 02:54:02PM 1 point [-]

human intelligence is not Friendly in the sense we wish FAI to be.

Well, of course it's not. Nobody ever said it is.

capable of inducing exponential growth.

Biologically, on the wetware substrate? I don't think that's possible. And if you mean uploads/ems, the distinction between human and AI becomes somewhat vague at this point...

Comment author: Dagon 21 October 2015 02:53:15PM -1 points [-]

Currently, I'd say the threat from unfriendly natural intelligence is many orders of magnitude higher than that from AI.

There is a valid question of the shape of the improvement curve, and it's at least somewhat believable that technological intelligence outstrips puny humans very rapidly at some point, and shortly thereafter the balance shifts by more than is imaginable.

Personally, I'm with you - we should be looking for ways to engineer friendliness into humans as the first step toward understanding and engineering it into machines.

Comment author: Lumifer 21 October 2015 03:13:13PM 2 points [-]

we should be looking for ways to engineer friendliness into humans

No. That's a really bad idea.

First, no one even knows what "friendliness" is. Second, I strongly suspect that attempts to genetically engineer "friendly humans" will end up creating genetic slaves.

Comment author: Dagon 21 October 2015 07:10:12PM 1 point [-]

Perhaps. Don't both of those concerns apply to AI as well?

Humans are the bigger threat, are more easily studied, and are (currently) changing slowly enough that we can be more deliberate than we can of a near-foom AI (presuming post-foom is too late).

I don't have anything in my moral framework that makes it acceptable to tinker with future conscious AIs and not with future conscious humans. Do you?

Comment author: Lumifer 21 October 2015 07:57:50PM *  2 points [-]

I don't have anything in my moral framework that makes it acceptable to tinker with future conscious AIs and not with future conscious humans. Do you?

Sure I do. I'm a speciesist :-)

Besides, we're not discussing what to do or not to do with hypothetical future conscious AIs. We're discussing whether "we should be looking for ways to engineer friendliness into humans". Humans are not hypothetical and "ways to engineer <desirable feature> into humans" are not hypothetical either. They are usually known by the name of "eugenics" and have a... mixed history. Do you have reasons to believe that future attempts to "engineer humans" will be much better?

Comment author: Tem42 21 October 2015 10:31:15PM 1 point [-]

For the most part, eugenics does not have a mixed history. Eugenics has a bad name because it has historically been performed by eliminating people from the gene pool -- through murder or sterilization. As far as I am aware, no significant eugenics movement has avoided this, and therefore the history would not qualify as mixed.

We should assume that future attempts will be better when those future attempts involve well developed, well understood, well tested, and widely (preferably universally) available changes to humans before they are born -- that is, changes that do not take anyone out of the gene pool.

Comment author: Dagon 22 October 2015 01:36:18PM -1 points [-]

Sure I do. I'm a speciesist :-)

I probably am too, but I don't much like it. I want to be a consciousness-ist.

Most humans are hypothetical, just like all AIs are. They haven't existed yet, and may not exist in the forms we imagine them. Much like MIRI is not recommending termination of any existing AIs, I am not recommending termination of existing humans.

I am merely pointing out that most of what I've read about FAI goals seems to apply to future humans as much or more as to future AIs.

Comment author: ChristianKl 21 October 2015 09:04:20PM 0 points [-]

Personally, I'm with you - we should be looking for ways to engineer friendliness into humans as the fist step toward understanding and engineering it into machines.

As far as I understand, engineering humans to be more friendly is a concern for the Chinese. They also happen to be more likely to do genetic engineering than the West.

Comment author: Clarity 21 October 2015 10:39:37AM 2 points [-]

I notice I boast about things without even considering whether others will find them impressive or shameful - like not attending class. It's a bad habit not to exercise my consideration, empathy, and/or theory of mind more. Reckon I've identified the right failure mode here, or am I misattributing?

Comment author: ChristianKl 21 October 2015 12:11:11PM 0 points [-]

I think you get it roughly right.

I recently added a quote from Dennett to the rationality quotes thread that fits here:

We really have to think of reasoning the way we think of romance, it takes two to tango. There has to be a communication.

You don't have to communicate with everybody, but it's very worthwhile to have two-way conversations about your important habits with other people. It's important for mental sanity.

Comment author: Clarity 19 October 2015 08:18:31AM 2 points [-]

Reintroducing the most meta-concept I'm aware of: integrative complexity!

Comment author: Lumifer 23 October 2015 02:55:05PM *  3 points [-]

Yo, mods! A mop and a bucket are needed at the forums to clean up after some script!

Comment author: Clarity 19 October 2015 02:20:38PM -1 points [-]

Why do I find nasal female voices so sexy? Even languages that emphasise nasality are sexy to me, like Chinese or French, whereas their language group companions with other linguistic similarities do not (e.g. Spanish in the case of French). Is there anything I can do to downregulate my nasal voice fetish?

Comment author: polymathwannabe 19 October 2015 02:47:34PM 4 points [-]

Why do you want less of something you like?

Comment author: [deleted] 19 October 2015 04:08:35PM 4 points [-]

Presumably "nasal voices" aren't a terminal goal for clarity, and he'd like it to stop clouding his judgement of other characteristics that are more important in finding someone he enjoys.

Comment author: Clarity 20 October 2015 01:25:07AM 2 points [-]

Yes that's right

I want more of something I like, but on the precondition that I want to like it. However, there is nothing I like that I have reason to like to the exclusion of all other likes, so if I can like something less, all else constant, it becomes easier for me to satisfy my liking for the remainder of things. Therefore, it is instrumental to my terminal goals to hold my likes in a strict preference order and to eliminate them in decreasing order of preference.

Comment author: satt 21 October 2015 02:20:55AM 0 points [-]

Operant conditioning...? Like exposing yourself to some stereotypically anti-sexy stimulus when you notice a nasal voice. (Not that I expect this to have much effect, but who knows?)

Comment author: Clarity 22 October 2015 08:53:12AM 0 points [-]

Attention everyone excellent, in one way or another!

  • What are the determinants of success for an amateur on their path to expertise in an area you are exceptional at that isn't already described accurately on Lesswrong?

  • What rough share of the variance in entrants' success can be attributed to each determinant you identify?

No time for modesty now, you're here to teach and learn!

Ask not what LessWrong can do for you, but what you can do for LessWrong!

Comment author: Clarity 21 October 2015 11:57:14PM 0 points [-]
  • MIRI's research guide is definitely overkill for interpreting its individual papers.
  • So, I have reason to believe it will be overkill for interpreting its technical research agenda.
  • Has anyone done a kind of annotation of the thing that is more amateur-friendly?
  • Surely it's in MIRI's best interest to make it more accessible, so as to compel potential benefactors to support their research?
Comment author: MrMind 21 October 2015 07:04:18AM 1 point [-]

How do you feel about floating posts in the Discussion section?
Like: electing a few threads that stay at top for the month/week they are active, the open threads, the monthly media thread, etc.
Is that even possible with LW code?

Comment author: username2 21 October 2015 11:05:54AM 0 points [-]

Is the LW code open source? If not, why not? Is it a fork of the reddit code? Can we update the reddit-specific code? (Reddit has allowed sticky posts since 2013, if I'm not mistaken.) Who takes care of the site?

Comment author: ChristianKl 21 October 2015 11:59:14AM 2 points [-]

Yes, the code is open source. Yes, it's a fork of the reddit code. TrikeApps takes care of the website as a volunteer effort to help MIRI. But LW isn't a high priority for either of them.

The code is on Google Code: https://code.google.com/p/lesswrong/

Comment author: username2 21 October 2015 01:39:14PM 3 points [-]

The github link on the source code page is dead, but I managed to find this. The code hasn't been touched since 2009. Is it possible that the same iteration of the code is the one that currently powers lesswrong?

https://github.com/jsomers/lesswrong

Comment author: Clarity 21 October 2015 01:11:46AM 1 point [-]

I've finally gotten to reading a bunch of MIRI papers. I don't pretend to understand them as they are meant to be understood. Can it be predicted whether a maths problem is solved, solvable, unsolved or unsolvable? I feel really... dismayed and discouraged reading through MIRI's work. I feel as though they are trying to solve questions that cannot be solved. Then again, many famous maths problems go from unsolved to solved, and I struggled with high school maths, so I would certainly prefer to defer to some impressive reasoning from you, my peers at LessWrong, before I abandon my support for MIRI.

Comment author: IlyaShpitser 21 October 2015 01:28:43AM *  6 points [-]

You should worry more about whether MIRI's way of doing problems is a good way of solving hard problems, not how hard the problems are.

Problem difficulty is a constant you cannot affect, social structure is a variable.

Comment author: gjm 21 October 2015 03:58:23PM 2 points [-]

I mostly agree, but: You can affect "problem difficulty" by selecting harder or easier problems. It would still be right not to be discouraged about MIRI's prospects if (1) the hard problems they're attacking are hard problems that absolutely need to be solved or (2) the hardness of the problems they're attacking is a necessary consequence of the hardness (or something) of other problems that absolutely need to be solved. But it might turn out, e.g., that the best road to safe AI takes an entirely different path from the ones MIRI is exploring, in which case it would be a reasonable criticism to say that they're expending a lot of effort attacking intractably hard problems rather than addressing the tractable problems that would actually help.

Comment author: IlyaShpitser 21 October 2015 05:39:14PM *  1 point [-]

MIRI would say they don't have the luxury of choosing easier problems. They think they are saving the world from an imminent crisis.

Comment author: gjm 21 October 2015 09:32:16PM 1 point [-]

They might well do, but others (e.g., Clarity) might not be persuaded.

Comment author: Clarity 22 October 2015 08:28:26AM 0 points [-]

We'll see :)

Comment author: Clarity 22 October 2015 08:26:52AM *  1 point [-]

As I read through the Agenda, I can hear Anna Salamon telling me something along the lines of: if you think something is a rational course of action, the antecedents to that course must necessarily be rational or you are wrong. She doesn't explain it like that, and I can't find that popular thread, but whatever...

Now reviewing the research agenda, there are some things which concern me about their way of doing problem solving. I'd appreciate anyone's input, challenges, clarification and additions:

..We focus on research that cannot be safely delegated to machines

nice sound bite. No quarrel with this. Just wanted to point it out

No AI problem (including the problem of error-tolerant agent design itself) can be safely delegated to a highly intelligent agent that has incentives to manipulate or deceive its programmers

*for the same reason, I won't delegate the task of designing friendly AI to strangers at MIRI alone ;) *

It would be risky to delegate a crucial task before attaining a solid theoretical understanding of exactly what task is being delegated.

this is the critical assumption behind MIRI's approach. Is there any reason to believe this is the case?

It may be possible to use our understanding of ideal Bayesian inference to task a highly intelligent system with developing increasingly effective approximations of a Bayesian reasoner, but it would be far more difficult to delegate the task of "finding good ways to revise how confident you are about claims" to an intelligent system before gaining a solid understanding of probability theory. The theoretical understanding is useful to ensure that the right questions are being asked.

shouldn't establishing this be the very first item in the research agenda, before jumping in to problems they assume are solvable? In fact, shouldn't the absence of evidence for them being solvable be evidence of absence... no?

When constructing intelligent systems which learn and interact with all the complexities of reality, it is not sufficient to verify that the algorithm behaves well in test settings. Additional work is necessary to verify that the system will continue working as intended in application.

has it been demonstrated anywhere that formalisms are optimal for exception handling?

Because the stakes are so high, testing combined with a gut-level intuition that the system will continue to work outside the test environment is insufficient, even if the testing is extensive.

Is this a legitimate forced choice between pure mathematics and gut level intuition + testing?

MIRI alleges a formal understanding is necessary for robust AI control, then defines formality as follows:

What constitutes a formal understanding? It seems essential to us to have both (1) an understanding of precisely what problem the system is intended to solve; and (2) an understanding of precisely why this practical system is expected to solve that abstract problem. The latter must wait for the development of practical smarter-than-human systems, but the former is a theoretical research problem that we can already examine.

So first, why aren't they disproving Rice's theorem?

The goal of much of the research outlined in this agenda is to ensure, in the domain of superintelligence alignment, where the stakes are incredibly high, that theoretical understanding comes first.

Okay, show me some data from a very well designed experiment suggesting theory should come first for the safe development of technology

Honestly, all the MIRI maths and formal logic fetishism got me impressed and awestruck. But I feel like their methodological integrity isn't tight. I reckon they need some quality statisticians and experiment designers to step in. On the other hand, MIRI operates a very, very good ship. They market well, fundraise well, movement-build well, community-build well, they design well, they write okay now (but not in the past!), they even get shit done, and they bring together very, very good abstract reasoners. And, through LessWrong, they have been instrumental in turning my life around.

In good faith, Clarity, still trying to be the in-house red team and failing slightly less at it one post at a time.

Comment author: IlyaShpitser 24 October 2015 10:32:43PM 0 points [-]

maths and formal logic

Lots of this going on in the big wide world. Consider looking in more places to deal with selection bias issues.

Comment author: Clarity 25 October 2015 01:04:25AM 0 points [-]

thanks for the lead :) I'll get on to it.

Comment author: MrMind 21 October 2015 06:57:26AM 1 point [-]

Can it be predicted whether a maths problem is solved, solvable, unsolved or unsolvable?

Eh, not really. Rice's theorem.
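
To unpack that a little: Rice's theorem says no program can decide any nontrivial semantic property of other programs ("is this problem solvable by this method" included), and the obstruction is the same diagonal trick as the halting problem. A toy Python sketch, with the caveat that the `halts` oracle is hypothetical by construction - that's the whole point:

```python
# Toy sketch of the diagonal argument behind Rice's theorem: suppose a
# total function halts(f) correctly returns True iff calling f()
# terminates, and build a program that defeats it.
def make_contrarian(halts):
    def contrarian():
        if halts(contrarian):
            while True:  # the oracle said we stop, so loop forever
                pass
        # the oracle said we loop forever, so return immediately
    return contrarian

# Any concrete oracle is wrong about its own contrarian. E.g. an oracle
# that always answers "doesn't halt" yields a program that promptly halts:
c = make_contrarian(lambda f: False)
print(c())  # prints None, i.e. it halted
```

Whatever `halts` answers about `contrarian`, the program does the opposite, so no such total oracle exists.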

Comment author: LessWrong 20 October 2015 06:25:53PM 1 point [-]

I have a few questions and think the guys at LW probably can help. I'm not sure LW is the best place to ask this, but I don't really know any other place.

Many people (politicians, famous, or what-have-you) have a website and have a "contact" page. How can I write a message that will have an impact? I'm assuming that: 1. They receive a large volume of email and may not respond or even read it; 2. The mail may not be delivered to them; maybe they have someone else to take care of it for them.

Those are the things that pop out of my head right now, anything else I should double-check?

Now, if those were the preparations, we have to get the actual cooking done. How can you make an impactful message? Something that will definitely get their attention, something they might just start thinking about in the middle of the day. Something that will make them stare at the screen and seriously think about it. Most important of all, something that gets them to reply, and a good reply that can make the exchange continue.

I'm willing to put significant effort into this, so don't be afraid to recommend a book or two, or three.

Comment author: Lumifer 20 October 2015 06:36:47PM 3 points [-]

In the usual way: offer them something they want.

Leaving sex aside, the traditional things are money and power. Impactful letters begin like this: "I { control a large voting block | can direct cash flow from a network of donors } and would like to discuss X with you". Oh, and, of course, impactful letters are NOT sent to the "contact page" address.

Comment author: ChristianKl 20 October 2015 07:45:06PM 1 point [-]

My first impulse is that it's worthwhile to focus on actual substance instead of focusing on trying to engage a politician for the sake of influencing a politician.

The second step is making sure that you don't appear as clueless as the average person who writes to a politician. Actually try to understand the positions of the stakeholders in the debate you want to comment on, and what the issue is about from the view of the politician.

Third would be to have a role in the debate. You can act as a member of an NGO. You can be a blogger. Failing that, you could be a person who edited the Wikipedia page of the politician and who tries to understand the policy of the politician better.

The standard way lobbyists get a politician's interest is also to give them campaign donations.

Comment author: airen 19 October 2015 09:28:43AM 1 point [-]

As any other amateur who reads Eliezer’s quantum physics sequence, I got caught up in the “why do we have the Born rule?” mystery. I actually found something that I thought was a bit suspicious (even though lots of people must have thought of it, or experimentally rejected this already.) Note that I'm deep in amateur swamp, and I'll gleefully accept any "wow, you are confused" rejections.

Here is my suggestion:

What if the universes that we live in are not located specifically in configuration space, but in the volume stretched out between configuration space and the complex amplitude? So instead of saying "the probability of winding up here in configuration space is high, because the corresponding amplitude is high", we would say "the probability of winding up here* is high, because there are a lot of universes here". And here* would mean somewhere on the line between a point in configuration space and the complex amplitude for that point. (All these universes would be exactly equal.) And then we completely remove the Born rule. Of course someone thought of this, but responds: "But if we double the amplitude in theory, the line becomes twice as long, and there would be twice as many universes. But this is not what we observe in our experiments; when we double the amplitude, the probability of finding ourselves there multiplies by four!" This is true if you study a line between the complex amplitude peak and a point in configuration space. But you are never supposed to study a point in configuration space; you are supposed to integrate over a volume in configuration space.

Calculating the volume between the complex amplitude “surface” and the configuration space, is not like taking all the squared amplitudes of all points of the configuration space and summing them up. The reason is that, when we traverse the space in one direction and the complex amplitude changes, the resulting volume “curves”, causing there to be more volume out near the edges (close to the amplitude peak) and less near the configuration space axis.

Take a look at the following image (meant to illustrate an "amplitude volume" for a single physical property): go to [http://www.wolframalpha.com] and type in: ParametricPlot3D {u Sin[t], u Cos[t], t / 5}, {t, 0, 15}, {u, 0, 1}

Imagine that we’d peer down from above, looking along the property axis. If we completely ignore what happens in the view direction, the volume (the blue areas) would have the shape of circles. If we’d double the amplitude, the volume from this perspective would be quadrupled.

But as it is, what happens along the property axis matters. The stretching out causes the volume to be less than the amplitude squared. It seems that the higher the frequency is, the closer the volume is to having a square relationship with the amplitude, while as the frequency lowers, the volume approaches a linear relationship with the amplitude. Studying the two extreme cases: with frequency 0 the geometric object would be just a flat plane, with an obvious linear relationship between amplitude and volume, while with an "infinite" frequency the geometric object would become a cylinder, with a squared relationship between volume and amplitude. This means that the overall current amplitude-to-configuration-space ratio is important, but as far as I know, it is unknown to us.
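
To make that last claim checkable, here is a rough numerical sketch in Python. It assumes my geometric picture: the "volume" is the area of the helical surface swept between the configuration-space axis and the amplitude, and `pitch` plays the role of inverse frequency (the function name and parametrization are mine):

```python
import numpy as np

def swept_area(amplitude, pitch, t_range=15.0, n=4000):
    # Helicoid (u*cos(t), u*sin(t), pitch*t): the area element is
    # sqrt(u^2 + pitch^2) du dt, and the u-integral does not depend on t.
    u = np.linspace(0.0, amplitude, n)
    f = np.sqrt(u**2 + pitch**2)
    line = np.sum((f[1:] + f[:-1]) / 2 * np.diff(u))  # trapezoid rule
    return line * t_range

# High frequency (tiny pitch, near-cylinder): doubling the amplitude
# roughly quadruples the swept area.
print(round(swept_area(2.0, 0.01) / swept_area(1.0, 0.01), 2))   # ~4.0
# Low frequency (large pitch, near-plane): doubling roughly doubles it.
print(round(swept_area(2.0, 100.0) / swept_area(1.0, 100.0), 2))  # ~2.0
```

So at least in this toy model the interpolation between linear and squared scaling comes out as described.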

In a laboratory environment, where all frequencies involved are relatively low, we would see systems evolving linearly. But when we observe the outcome of the systems, and entangle them with everything else, what suddenly matters is the volume of our combined wave which has a very very high frequency.

Or does it? At this point I'm beginning to lose track, and the questions start piling up.

What happens when multiple dimensions are mixed in? I’m guessing that high-frequency/high-amplitude still approaches a squared relationship from amplitude to volume, but I’m not at all certain.

What happens over time as the universe branches, does the amplitude constantly decrease while the length and frequencies remain the same? (Causing the relationship to dilute from squared to linear?)

Note that this suggestion also implies that there really exists one single configuration space / wave function that forms our reality.

So, what do you think?

Comment author: Manfred 19 October 2015 05:14:27PM 0 points [-]

At least one of us is confused about this post :P

It seems like what you're doing is strictly more complicated than just doubling the number of dimensions in state-space and using those extra dimensions only so you can say the amount of "stuff" goes as amplitude squared. Which is already very unsatisfying.

I'm really confused where frequency is supposed to come in.

Comment author: airen 19 October 2015 06:21:14PM 1 point [-]

It's most likely me being confused.

My picture of it right now is that all the dimensions you need in total, are all the dimensions in state-space + 2 dimensions for the complex amplitude. If this assumption is wrong, then we have found the error in my thinking already!

Note that the two complex amplitude dimensions are of course not like the other dimensions. For every position in the state-space, there is a single point in the amplitude dimensions. Or, in my suggestion, a line from the origin out to the calculated complex value.

Don't try to think this through with matrices, there's a very real chance that what I'm after cannot be captured by matrices at all. I think you have to do a complete geometric picture of it.

Comment author: Clarity 22 October 2015 12:34:52PM *  -2 points [-]

I'm troubleshooting my ongoing failures in sustained, romantic relationships. My social cognition is impaired, and so too is the social cognition of autistic people. With some googling about feelings of inadequacy and Asperger's, I found a book that documents feelings of inadequacy in Asperger's men as they relate to relationships with women, and that proposes one conditional stimulus (women 'performing' (better in relationships) than them) to explain this phenomenon:

Men (with Asperger syndrome)...may suffer feelings of inadequacy if they appear to be performing less well than the women that are around them..

I appreciated this reading since it characterises a conditional that is not usually characterised in similar literature. It amps up my appreciation for primary qualitative literature in mental health.

Another Asperger-like trait I have is comparable to alexithymia, but a more general deficit in self-awareness. So, if the author's hypothesis for the conditional stimulus in some aspies' feelings of inadequacy also explains my feeling of inadequacy, I have little or no intuitive feeling of whether that is the case. This makes troubleshooting cognitive biases, and by consequence, using REBT/CBT techniques, highly inefficient for me, as I have to test out every possible logical fallacy I may or may not be making against all possible corrections. The space is narrowed, of course, by knowledge of the kinds of fallacies similar people tend to make and interventions that tend to work. I wanted to type this out to better wrap my head around my theory about my very slow rate of progress in improving certain aspects of my mental health and social skills. I hope that it is useful for anyone else who is struggling with similar issues, since I know of no one else similar enough to me that I can use them as a general point of reference and mentorship for multiple kinds of problems we may share.

My current attitude to relationship strategy given my asperger like relationship issues reflects the position given here that both aspie and (potential) partners can work together to have a successful relationship.

I'm working on other insecurities too, like insecurity around

Comment author: ChristianKl 22 October 2015 04:05:33PM 1 point [-]

Men (with Asperger syndrome)...may suffer feelings of inadequacy if they appear to be performing less well than the women that are around them..

If you idealize a woman and seek the perfect woman, you will appear to be performing less well than your image of the person. To avoid that effect it's good to have a relationship with a person where you also see their flaws and you both can be open about your flaws.

Comment author: Clarity 22 October 2015 09:54:37PM 1 point [-]

I had never contemplated this perspective before you suggested it, and it's immediately compelling. Thank you.

Comment author: Clarity 20 October 2015 11:01:34AM -2 points [-]

I'm so over getting super fascinated by someone, thinking they're the sun and moon, then talking to them more and realising they're just human like the rest of us... agh. I'm so bad with romance, haha. I don't know how to stop idealising people who seem like perfect matches at the time, when, as I talk to them, I realise they're just regular people. What can I do about this?

Comment author: Romashka 20 October 2015 12:34:39PM 6 points [-]

Value regular people?

Comment author: ChristianKl 20 October 2015 06:40:49PM 1 point [-]

Having a healthy relationship is about relating to another human being, not about relating to a mental ideal. If you idealize them at the beginning, that's okay; it's typical human behavior. You also don't have to commit to a relationship for life for it to be a valuable relationship.

Comment author: Clarity 29 October 2015 09:00:10PM 0 points [-]

How well can you disambiguate someone's notes to the self? I'd like to calibrate my powers of mentalisation!

Here are some hypothetical goals someone may have. For those that are unclear or odd, would you like to propose what you think they may really be saying, and how you came to that conclusion? Can you even infer which of them mean what they say and which mean something secret?

I'll give you feedback, cause I generated and obfuscated the writing myself!

  • Complete remaining non E3 non research units
  • Fight with the lions after touring Turkey
  • Apply for PhD programs in Norway, Germany or UK
  • Network with intelligent Africans
  • Get married and have kids in the Baltic countries
  • Bush tucker tour in south and central Australia
  • hair silver grey then blue
  • Buy €M
  • Investigate Colombian prostitution tolerance zones then Investigate post auc criminal groups (see wiki) networks, organisational structures and psyops & the office of the high counsellor for reintegration.
Comment author: LessWrong 25 October 2015 05:36:35PM *  0 points [-]

Not really sure where to ask but is anyone in contact with Dahlen? We've had a cool discussion but it stopped abruptly and they haven't posted anything for a while nor replied to PMs.

Comment author: Cariyaga 24 October 2015 12:18:26AM 0 points [-]

What website would you suggest for looking into medical research, for someone who's not versed in reading medical literature? I'm specifically looking for any developments or studies into the treatment of urethral strictures for my own reference.

Comment author: ChristianKl 24 October 2015 07:20:08AM 1 point [-]

The Mayo Clinic provides good introductory descriptions: http://www.mayoclinic.org/diseases-conditions/urethral-stricture/basics/definition/con-20037057

But even if you are not versed in reading medical literature, if you want to learn about new developments, read the original papers. If there are obstacles, there's a LW help desk.

Comment author: Panorama 23 October 2015 10:09:20AM 0 points [-]

Final Kiss of Two Stars Heading for Catastrophe

Using ESO’s Very Large Telescope, an international team of astronomers have found the hottest and most massive double star with components so close that they touch each other. The two stars in the extreme system VFTS 352 could be heading for a dramatic end, during which the two stars either coalesce to create a single giant star, or form a binary black hole.

Comment author: Clarity 22 October 2015 12:37:58AM 0 points [-]

Is there any work on developing brain implants or similar for pain moderation, for cases of sudden injury where you want to down-regulate pain so you can think clearly, get help, and function?

I don't feel safe knowing I have to wait for an ambulance to get access to serious pain killers and such.

It's probably best that access is restricted, since such drugs are often harmful and liable to abuse, but surely someone is working on solving these issues.

Comment author: ChristianKl 22 October 2015 07:33:31AM 0 points [-]

The human brain is quite capable of shutting down pain without any implants, provided you train that ability.

Comment author: Clarity 22 October 2015 10:59:14AM 0 points [-]

Can you guide me down this rabbit hole?

Comment author: ChristianKl 22 October 2015 11:27:13AM 0 points [-]

Dave Elman has a well-known process for shutting down pain via hypnosis. I personally know two people who got their wisdom teeth pulled while shutting off the pain themselves via self-hypnosis.

In CFAR lingo, pain is a very strong signal from System 1, and the fact that System 2 thinks the pain is not useful doesn't mean that System 1 shuts it off. You actually need a very good relationship between System 1 and System 2 to make that happen.

A good start for that is Gendlin's focusing. Listen to the uncomfortable feelings in your body to release them. As a beginner you likely won't release strong physical pain that way, but lesser issues such as a headache can from time to time be released.

Comment author: OrphanWilde 22 October 2015 01:29:53PM 1 point [-]

Move your locus of self to the afflicted space (it helps to close your eyes and visualize moving your mind to the point; to practice this, if it comes with difficulty, close your eyes and visualize flying around the room you're in); pain vanishes while you hold it there. It returns, slightly diminished, when you relax your focus. Once you get practiced, you can split your locus of self and direct threads of attention/self onto painful areas, which diminish with the attention.

That's my description. Your internal descriptions may differ, and/or these instructions may not apply to you in any sense - the internal experience of a mind varies wildly from person to person.

Comment author: ChristianKl 22 October 2015 03:55:47PM 0 points [-]

What kind of results do you achieve with that strategy?

Comment author: OrphanWilde 22 October 2015 05:16:42PM 0 points [-]

Pain in the area of focus fades or vanishes. I'm assuming, from the similarity between focusing on the pain and "listening" to the uncomfortable feeling, that there's some kind of similar action taking place there.

Comment author: ChristianKl 22 October 2015 08:02:15PM 0 points [-]

What was the strongest pain to which you successfully applied the technique?

Comment author: OrphanWilde 22 October 2015 08:09:04PM 0 points [-]

A hand I had accidentally dumped boiling liquid over, although the reduction in pain wasn't complete in that case, and it was difficult to maintain concentration. (I couldn't make my attention... large enough? To encompass the entire hand.)

I don't generally apply the technique, because it's usually counterproductive; the problem with pain is that it is distracting me from what I want to pay attention to, so giving it my full attention is just making the problem worse.

Comment author: ChristianKl 22 October 2015 08:20:44PM 0 points [-]

You mean you have to keep up the mental concentration to keep the pain reduction?

Comment author: Clarity 21 October 2015 12:42:14AM *  0 points [-]

The Intelligent Agent Foundations Forum looks very active. I'm glad it has taken off.

Can I solicit any reviews about anyone's experience with it so far?

The content itself is beyond me. I'm curious whether I should refer to it intermittently while still learning MIRI's research syllabus or whether the expectation is to have a command of everything before starting. I suspect the latter given the caliber of posts, but that may simply be the founder effect and unintended.

Comment author: Clarity 20 October 2015 11:12:26PM 0 points [-]

It just struck me... I have no idea about the room-for-more-funding considerations for MIRI. Googling suggests that question hasn't even been seriously analysed before. Surely I'm missing something...

Comment author: Artaxerxes 21 October 2015 05:55:04AM *  4 points [-]

This post discusses MIRI, and what they can do with funding of different levels.

What are you looking for, more specifically?

Comment author: Clarity 19 October 2015 02:15:59PM 0 points [-]

Has anyone tried 'happiness meditation'?

Comment author: [deleted] 19 October 2015 04:06:56PM 4 points [-]

I've done loving kindness meditation, which is sometimes known to induce feelings of extreme joy.

What do you want to know?

Comment author: MrMind 20 October 2015 07:02:37AM 1 point [-]

Yes. I prime myself every day in the morning with love, joy, gratitude, and such.

Comment author: Clarity 22 October 2015 11:00:03AM 0 points [-]

I have a Gmail, Google Drive, Google Calendar, Facebook and Facebook Messenger apps on my mobile (iphone).

Can I streamline (reduce the number of) my apps without losing functioning?

Comment author: lmm 24 October 2015 05:34:14PM 1 point [-]

This sounds like an XY problem - what are you trying to achieve by reducing the number of apps?

Comment author: Clarity 25 October 2015 12:59:58AM 0 points [-]

XY? What does that refer to? Female chromosomes?

Trying to reduce decision fatigue and streamline my time management. Spending lots of time looking at apps lately.

Comment author: philh 25 October 2015 09:14:37PM 1 point [-]

http://xyproblem.info/

The XY problem is asking about your attempted solution rather than your actual problem.

It's a terrible name, but we seem to be stuck with it.

Comment author: ike 27 October 2015 01:07:27AM 0 points [-]

You can probably do most of what the Facebook app lets you do in Safari. You can add a Google calendar to the stock iOS Calendar app.

You might be able to text whoever you message with Messenger, or just use the website.

Gmail can likewise be set up with the stock Mail app.

The only app you really need is Google Drive.

Comment author: polymathwannabe 22 October 2015 02:00:09PM 0 points [-]

On my tablet, I use all those (including the Facebook ones) through Google Chrome. I don't miss the apps at all.

Comment author: SodaPopinski 21 October 2015 10:57:26PM -1 points [-]

Do we know whether quantum mechanics could rule out acausal trade between partners outside each other's light cones? Perhaps it is impossible to model someone so far away precisely enough to get a utility gain out of an acausal trade? I started thinking about this after reading this wiki article on the 'Free will theorem': https://en.wikipedia.org/wiki/Free_will_theorem .

Comment author: lmm 24 October 2015 05:42:51PM 0 points [-]

The whole point of acausal trading is that it doesn't require any causal link. I don't think there's any rule that says it's inherently hard to model people a long way away.

Imagine being an AI running on some high-quality silicon hardware that splits itself into two halves, and one half falls into a rotating black hole (but has engines that let it avoid the singularity, at least for a while). The two are now causally disconnected (well, the one outside can send messages to the one inside, but not vice versa) but still have very accurate models of each other.

Comment author: SodaPopinski 24 October 2015 10:51:41PM 0 points [-]

Yes, I understand the point of acausal trading. The point of my question was to speculate on how likely it is that quantum mechanics may prohibit modeling accurate enough to make acausal trading actually work. My intuition is based on the fact that, in general, faster-than-light transmission of information is prohibited. For example, even though entangled particles update on each other's state when they are outside each other's light cones, it is known that it is not possible to transmit information faster than light using this fact.
Now, does mutually enhancing each other's utility count as information? I don't think so. But my instinct is that acausal trade protocols will not be possible due to the level of modelling required and the noise introduced by quantum mechanics.
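
For what it's worth, the no-signaling half of my intuition is easy to check numerically. The sketch below (Python; the state is the textbook spin-1/2 singlet, the function names are mine) shows that Alice's local measurement statistics are identical no matter which basis Bob chooses, so entanglement alone carries no message:

```python
import numpy as np

def measurement_projectors(theta):
    # Projectors onto spin-up/down along an axis at angle theta in the x-z plane.
    up = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    down = np.array([-np.sin(theta / 2), np.cos(theta / 2)])
    return [np.outer(v, v) for v in (up, down)]

def alice_marginal(alice_angle, bob_angle):
    # Singlet state (|01> - |10>)/sqrt(2) as a 4-component vector.
    singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
    pa = measurement_projectors(alice_angle)
    pb = measurement_projectors(bob_angle)
    # P(a) = sum_b <psi| P_a (x) P_b |psi>: Bob's outcome is marginalized out.
    return [sum(singlet @ np.kron(pa[a], pb[b]) @ singlet for b in range(2))
            for a in range(2)]

# Whatever basis Bob picks, Alice sees 50/50: no faster-than-light signal.
print([round(p, 6) for p in alice_marginal(0.0, 0.0)])
print([round(p, 6) for p in alice_marginal(0.0, 1.3)])
```

Of course this only rules out signaling; whether the modeling precision needed for acausal trade survives quantum noise is a separate question.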

Comment author: lmm 04 November 2015 01:15:18PM 0 points [-]

I don't understand. Computers are able to provide reliable boolean logic even though they're made of quantum mechanics. And any "uncertainty" introduced by QM has nothing to do with distance. You seem very confused.

Comment author: SodaPopinski 04 November 2015 07:30:57PM *  0 points [-]

My question is simply: Do we have any reason to believe that the uncertainty introduced by quantum mechanics will preclude the level of precision in which two agents have to model each other in order to engage in acausal trade?

Comment author: lmm 06 November 2015 08:01:16PM 0 points [-]

No. There are any number of predictable systems in our quantum universe, and no reason to believe that an agent need be anything other than e.g. a computer program. In any case "noise" is the wrong way to think about QM; quantum behaviour is precisely predictable, it's just the subjective Born probabilities that apply.

Comment author: Clarity 21 October 2015 12:55:20AM -1 points [-]

Critical thinking is a responsibility for every intelligent agent, just as benefiting from the critical thought of others ought to be a right for all life capable of suffering. Millions of Holocaust victims were at the mercy of men just following orders. Never again.

Comment author: NancyLebovitz 21 October 2015 02:06:59PM 3 points [-]

I thought you were going to cite this, which shows a much higher level of critical thinking than most people can manage.

Comment author: Viliam 21 October 2015 11:27:25AM *  2 points [-]

Thank you for the applause lights, but how do we give this "critical thinking" to all the intelligent agents?

We probably can't do that even for the majority of humans in developed world.

Comment author: Clarity 21 October 2015 12:31:16PM *  0 points [-]

Thank you for the applause lights,

Gotta keep the troops' morale up too.

We probably can't do that even for the majority of humans in developed world.

Not with that kind of attitude we can't.

Comment author: Clarity 20 October 2015 10:15:53AM -1 points [-]

Imperialism and a defence of inequality in capitalist republics

Who was it that first articulated the argument that, since money flows to those who can anticipate and predict the needs of others, those who gain power are those who can do that, and therefore, if those people are caring, then they are the best capable of looking after the rest?

And what strong critiques of it are available?