
You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

Comment author: shminux 20 January 2016 06:00:20AM 15 points [-]

Humans are not bad at math. We are excellent at math. We can calculate the best trajectory to throw a ball into a hoop, the exact way to move our jiggly appendages to achieve it, accounting for a million little details, all in a blink of an eye. Few if any modern computers can do as well.

The problem is one of definition: we call "math" the part of math that is HARD FOR HUMANS. Because why bother giving a special name to something that does not require special learning techniques?

Comment author: Lumifer 12 January 2016 07:33:30PM *  16 points [-]

A physics research team has members who can (and occasionally do) secretly insert false signals into the experiment the team is running. The goal is to practice resistance to false positives. A very interesting approach; it's the first time I've heard of physicists using it.

Bias combat in action :-)

The LIGO is almost unique among physics experiments in practising ‘blind injection’. A team of three collaboration members has the ability to simulate a detection by using actuators to move the mirrors. “Only they know if, and when, a certain type of signal has been injected,”...

Two such exercises took place during earlier science runs of LIGO, one in 2007 and one in 2010. ... The original blind-injection exercises took 18 months and 6 months respectively. The first one was discarded, but in the second case, the collaboration wrote a paper and held a vote to decide whether they would make an announcement. Only then did the blind-injection team ‘open the envelope’ and reveal that the events had been staged.

Source

Comment author: RichardKennaway 09 January 2016 08:31:33PM 13 points [-]

let us assume, that the top leadership of ISIS is composed of completely rational and very intelligent individuals

Of the sort that casebash assures us cannot exist? The imaginary competence of fictional rational heroes? Top human genius level?

No. These all amount to assuming a falsehood.

  1. The premise of this article is wrong. The ISIS are really just a bunch of idiots, and their apparent successes are only caused by the powers in the region being much more incompetent than ISIS

Another straw falsehood to set beside the first one. All of this rules out from the start any consideration of ISIS as they actually are. They are real people with a mission, no more and no less intelligent than anyone else who succeeds in doing what they have done so far.

There is no mystery about what ISIS wants. They tell the world in their glossy magazine, available in many languages, including English (see the link at the foot of that page). They tell the world in every announcement and proclamation.

"Rationalists", however, seem incapable of believing that anyone ever means what they say. Nothing is what it is, but a signal of something else.

I have not seen any reason to suppose that they do not intend exactly what they say, just as Hitler did in "Mein Kampf". They are fighting to establish a new Caliphate which will spread Islam by the sword to the whole world, Allahu akbar. All else is strategy and tactics. If their current funding model is unsustainable, they will change it as circumstances require. If their recruitment methods falter, they will search for other ways.

More useful questions would be: given their supreme goal (to establish a new Caliphate which will spread Islam by the sword to the whole world), what should they do to accomplish that? And how should we (by which I mean, everyone who wants Islamic universalism to fail) act to prevent them?

I recommend a reading of Max Frisch's play "The Fire Raisers".

In response to comment by [deleted] on Open Thread, Dec. 28 - Jan. 3, 2016
Comment author: Viliam 29 December 2015 09:09:48PM *  16 points [-]

Your 'easiest way' feels to me like: "If you are low-status, and you want to change it, aim for middle status, not high status." Which in my opinion is excellent advice. Because if you succeed at this, you can try for higher status later, and it will feel more comfortable. But many people consistently keep aiming higher than they can afford, and then they predictably fail. Now that I think about it, this applies to so many areas of life -- people trying to run before they can walk, which ultimately leaves them unable to either walk or run.

People probably fail to notice this strategy because they see the situation as a dichotomy between "low status" and "high status", as if any deviation from the highest observed status means they remain at the bottom.

None of the following behaviors is highest status:

  • Joining an existing group, instead of creating your own, or waiting for the group to form spontaneously around you.
  • Learning the norms of the group, instead of expecting the group to forgive you all transgressions.
  • Taking interest in the topics of the group, instead of expecting the group to switch to the topics that interest you.
  • Following the group consensus, instead of signalling your uniqueness by disagreeing with it.
  • Working hard, instead of displaying that you don't have to work hard.
  • Talking about interesting and relevant things, instead of expecting people to admire you regardless of what you say.

And that's exactly why a person starting at the bottom should do them, because it will bring them to the middle. Actually, this strategy would bring the average person to the middle; the highly intelligent people will end up above the middle, because their intelligence will allow them to perform better at these things.

Comment author: Viliam 24 November 2015 09:11:30AM 16 points [-]

The first association I have with your username is "spams Open Threads with not really interesting questions".

Note that there are two parts in that objection. Posting a boring question in an Open Thread is not a problem per se -- I don't really want to discourage people from doing that. It's just that when I open any Open Thread, and there are at least five boring top-level comments by the same user, instead of simply ignoring them I feel annoyed.

Many of your comments are very general debate-openers, where you expect others to entertain you, but don't provide anything in return. Choosing your recent downvoted question as an example:

How do you estimate threats and your ability to cope; what advice can you share with others based on your experiences?

First, how do you estimate "threats and your ability to cope"? If you ask other people to provide their data, it would be polite to provide your own.

Second, what is your goal here? Are you just bored and want to start a debate that could entertain you? Or are you thinking about a specific problem you are trying to solve? Then being more specific in the question could help you get a more relevant answer. But the thing is, your not being specific seems like evidence for the "I am just bored and want you to entertain me" variant.

Comment author: VoiceOfRa 23 November 2015 02:59:03AM 10 points [-]

This is part of my broader project, Intentional Insights, of conveying rational thinking, including about politics, to a broad audience to raise the sanity waterline.

Given that your idea of "rational thinking" appears to consist of the kind of Straw-Vulcanism that gives "rational thinking" a bad name, I'd appreciate it if you would stop trying to "help" the movement.

Comment author: Gleb_Tsipursky 18 November 2015 11:29:42PM *  13 points [-]

Thank you for bringing this up as a topic of discussion! I'm really interested to see what the Less Wrong community has to say about this.

Let me be clear that my goal, and that of Intentional Insights as a whole, is about raising the sanity waterline. We do not assume that all who engage with our content will get to the level of being aspiring rationalists who can participate actively with Less Wrong. This is not to say that it doesn't happen, and in fact some members of our audience have already started to do so, such as Ella. Others are right now reading the Sequences and are passively lurking without actively engaging.

I want to add a bit more about the Intentional Insights approach to raising the sanity waterline broadly.

The social media channel of raising the sanity waterline is only one area of our work. The goal of that channel is to use the strategies of online marketing and the language of self-improvement to get rationality spread broadly through engaging articles. To be concrete and specific, here is an example of one such article: "6 Science-Based Hacks for Growing Mentally Stronger." BTW, editors are usually the ones who write the headline, so I can't "take the credit" for the click-baity nature of the title in most cases.

Another area of work is publishing op-eds in prominent venues on topical matters that address recent political matters in a politically-oriented manner. For example, here is an article of this type: "Get Donald Trump out of my brain: The neuroscience that explains why he’s running away with the GOP."

Another area of work is collaborating with other organizations, especially secular ones, to get our content to their audience. For example, here is a workshop we did on helping secular people find purpose using science.

We also give interviews to prominent venues on rationality-informed topics: 1, 2.

Our model works as follows: once people check out our content on other websites and venues, some will then visit the Intentional Insights website to engage with its content. As an example, after the article on 6 Science-Based Hacks for Growing Mentally Stronger appeared, it was shared over 2K times on social media, so it probably had views in the tens of thousands, if not hundreds of thousands. Then, over 1K people visited the Intentional Insights website directly from the Lifehack website. In other words, they were interested enough not only to skim the article, but also to follow the links to Intentional Insights, which was listed in my bio. Of those, some will want to engage with our content further. As an example, we had a large wave of new people follow us on Facebook and other social media and subscribe to our newsletter in the week after the article came out. I can't say how many did so as a result of seeing the article versus other factors, but there was a large bump. So there is evidence of people wanting to get more thoroughly engaged.

The articles we put out on other media channels and on which we collaborate with other groups are more oriented toward entertainment and less oriented toward education in rationality, although they do convey some rationality ideas. For those who engage more thoroughly with our content, we then provide resources that are more educationally oriented, such as workshop videos, online classes, books, and apps, all described on the "About Us" page. Our content is peer reviewed by our Advisory Board members and others who have expertise in decision-making, social work, education, nonprofit work, and other areas.

Finally, I want to lay out our Theory of Change. This is a standard nonprofit document that describes our goals, our assumptions about the world, what steps we take to accomplish our goals, and how we evaluate our impact. The Executive Summary of our Theory of Change is below, and there is also a link to the draft version of our full ToC at the bottom.

Executive Summary

1) The goal of Intentional Insights is to create a world where all rely on research-based strategies to make wise decisions and lead to mutual flourishing.

2) To achieve this goal, we believe that people need to be motivated to learn and have broadly accessible information about such research-based strategies, and also integrate these strategies into their daily lives through regular practice.

3) We assume that:

  • Some natural and intuitive human thinking, feeling, and behavior patterns are flawed in ways that undermine wise decisions.
  • Problematic decision making undermines mutual flourishing in a number of life areas.
  • These flawed thinking, feeling, and behavior patterns can be improved through effective interventions.
  • We can motivate and teach people to improve their thinking, feeling, and behavior patterns by presenting our content in ways that combine education and entertainment.

4) Our intervention is helping people improve their patterns of thinking, feeling, and behavior to enable them to make wise decisions and bring about mutual flourishing.

5) Our outputs, what we do, come in the form of online content such as blog entries, videos, etc., on our channels and in external publications, as well as collaborations with other organizations.

6) Our metrics of impact are in the form of anecdotal evidence, feedback forms from workshops, and studies we run on our content.

Here is the draft version of our Theory of Change.

Also, about Endless September. After people engage with our content for a while, we introduce them to more advanced things on ClearerThinking, and we are in fact discussing collaborating with Spencer Greenberg, as I discussed in this comment. After that, we introduce them to CFAR and Less Wrong. So those who go through this chain are not the kind who would contribute to Endless September.

The large majority, we expect, would not go through this chain. They instead engage with rational thinking in other venues, as Viliam mentioned above. This fits with the fact that my goal, and that of Intentional Insights as a whole, is about raising the sanity waterline, and only secondarily getting people to the level of being aspiring rationalists who can participate actively with Less Wrong.

Well, that's all. I look forward to your thoughts! I'm always looking for better ways to do things, so I'm very happy to update my beliefs about our methods and optimize them based on wise advice :-)

EDIT: Added link to comment where I discuss our collaboration with Spencer Greenberg's ClearerThinking and also about our audience engaging with Less Wrong, such as Ella.

Comment author: VoiceOfRa 12 October 2015 10:20:36PM 6 points [-]

I've found that I can tolerate bigotry a lot better than I can tolerate bigoted policy proposals

What definition of "bigotry" are you using? The "standard definition" amounts to "applying Bayesian priors to people". So is discussion of the policy implications of Bayesian reasoning now punishable by banning without notice? Also since you admit that he didn't actually make the proposal but was "close to suggesting" it does that mean that even being "close to suggesting" implications of Bayesian reasoning for policy is bannable?

Note to Eliezer or any super-administrators reading this: I strongly suggest that in the interest of keeping LessWrong a place where people can discuss rationality without fear of suddenly being banned, NancyLebovitz's administrative privileges be revoked immediately.

Comment author: bogus 06 October 2015 06:26:49PM *  15 points [-]

I also think that this sets a very murky precedent. I don't disagree at all with banning AA if it turns out he has abused voting privileges, but so far there's no hard evidence that he did. Putting that aside for now, all we're left with is a block being based on whether some individual moderator "can tolerate" some controversial comment (meaning that it attracts both downvotes and upvotes, as far as the LW userbase is concerned). This strikes me as careless.

Comment author: VoiceOfRa 07 September 2015 06:43:27PM 16 points [-]

Truly, there would be no refugee crisis in Europe if the refugees were unable to enter Europe. Instead, there would be a refugee crisis elsewhere.

Well, using the term "refugee" here is misleading. Note that 75% of the refugees are men. So either they feel that the places they're leaving are safe for women and children, or their main motivation isn't escaping danger.

There may be an argument that we, the states of Europe, should build strong walls and cultivate our own gardens within them, but observing that strong walls would allow us to cultivate our own gardens is not that argument. And in the longer run, strong walls may not be enough.

Well, the strong walls are doing a remarkably good job of keeping them out of the Gulf states.

In response to Deworming a movement
Comment author: gjm 30 August 2015 11:07:32AM 15 points [-]

You could stand to be more explicit in your reasoning. At the moment it seems to go like this:

  • One paper that found big benefits from deworming has recently been subject to criticism, which criticism has in turn been criticized, etc.
  • GiveWell posted some discussion of the debate that you think doesn't engage with the meat of the issue.
    • It seems to me to engage with quite a lot, and you don't say what it is you think they aren't engaging with.
  • Therefore GiveWell are inept.
  • Therefore, giving to GiveWell's recommended charities is worse than, and I quote, buying healthier food for yourself.

It seems to me that no part of this argument makes much sense. There are intelligent experts on both sides of the "worm wars"; GiveWell's reasoning doesn't seem obviously crazy to me (their support for deworming was never entirely based on M&K's findings; many of those findings hold up under the currently-debated reanalysis; the EA case for deworming was always that although the effectiveness of deworming is highly uncertain it's really really cheap and the current estimates of its cost-effectiveness would need to be too high by an order of magnitude or more for it to stop being better than, e.g. giving money directly to the beneficiaries -- which, you may note, is an intervention that GiveWell are also recommending, so it's not as if they aren't discounting deworming somewhat on account of the uncertainty); none of this seems to justify the extremely strong words you use.

If you have a more detailed argument that actually gets from the available evidence to "the EA movement is hopelessly messed up", let's hear it. That would be interesting and important. But for the moment I'm afraid I've got you in the same mental pigeonhole as others who've come along to LW and said "I've found some nits to pick with something EAs tend to approve of; therefore we should give up the whole idea of charity and concentrate on benefitting ourselves".

Comment author: Viliam 19 August 2015 07:34:16AM 16 points [-]

I like this part from a MIRI blog article:

The problem isn’t Terminator. It’s “King Midas.” King Midas got exactly what he wished for — every object he touched turned to gold. His food turned to gold, his children turned to gold, and he died hungry and alone.

When people insist on using fictional evidence, at least give them one that matches your concerns.

(It will probably also help that King Midas is higher status than Terminator.)

Comment author: CellBioGuy 17 August 2015 06:51:30PM *  16 points [-]

A quick update: my astrobiology posts are still coming, but after conferring with Toggle and being surprised by a massive response to my first blog posts (who linked me on ycombinator? thousands of views in two days), I am doing a bit more research on the geochemical history of Earth and other solar system objects in an attempt to be as rigorous as possible. I only play an astrobiologist on TV, so to speak; day to day my focus is a little smaller.

Comment author: pianoforte611 11 August 2015 10:42:14PM *  13 points [-]

I dislike these threads. They encourage and reward ill-thought-out contrarian (often straight-up crackpot) ideas. Correcting them is a large cost, in part because convincing an audience doesn't require arguing things that are true; it merely requires arguing things that take more time to refute than to assert. I'd rather not get tangled up at the object level by citing real examples, but here is an example of the kind of idea I would expect to see here.

Made up crazy idea (that I expect some people here would endorse):

"Get rid of research ethics boards, they prevent useful research from being done that would benefit society out of an ill founded fear of us becoming the Nazis"

This sort of argument ignores the history behind why research ethics boards exist, and is usually asserted by people who are ignorant of the actual guidelines that research ethics boards abide by. It's also usually asserted without knowledge of the actual abuses of patient trust that were committed before research ethics guidelines were established, which include withholding known treatments and doing a liver toxicity study in children without telling them (quite an extensive one, in which biopsies were taken, and upon recovery, liver toxicity was re-induced, leading to damage lasting at least a month).

(Of course, it took me much longer to write that response than to make the initial claim)

Comment author: RolfAndreassen 02 August 2015 05:39:42AM 16 points [-]

I got a new job! Which pays better than the old one.

Comment author: Lumifer 02 August 2015 12:01:12AM *  15 points [-]

I think this is mostly a function of the subculture to which you belong and, specifically, which things you find interesting, exciting, important, etc. IQ, of course, is a major underlying factor, but it's not just IQ.

Each subculture also has its own social rituals and implicit communication methods so when you cross over to a different one you are very likely to have conversation difficulties -- unless your social skills are highly developed and you have some idea about how that subculture works.

Comment author: philh 01 August 2015 08:08:17PM 16 points [-]

Planned and executed a four-day solo hike. I did half of the South Downs Way, from Eastbourne to Amberley. That's 83 km of route; I probably walked about 90 km in total, taking a little over 72 hours, camping on or just off the trail wherever seemed like a good location. I've never done anything like this before. It took less than two weeks from when I decided to go to when I set off.

Comment author: James_Miller 31 July 2015 03:33:37PM *  13 points [-]

To generate political support for exterminating mosquitoes that bite people, we should spread a rumor that besides killing 725,000 humans each year, this insect also threatens a few cute Zimbabwe lions.

Comment author: [deleted] 16 July 2015 11:52:45AM 16 points [-]

People do this as well. In a certain country, they wanted to eliminate corruption from public construction projects, so they created a numbers-based evaluation system for tenders. Differences in price offered were taken into account with a weight of 1, and differences in penalties / liquidated damages with a weight of 6. I am not sure what the best English term for the latter is, but basically it is the construction company saying: if the project is late, I am willing to pay X amount of penalty per day. Most companies usually offer something like 0.1% of the price. One company offered 2%, which means that if they were 10-15 days late, their whole profit would be gone; and since this was weighted 6, they could offer an outrageous price and the rules still forced the government to accept their offer.

It turned out that this was not just a bold gaming of the rules, it was corruption as well: there was no law saying that the offered penalty must actually be enforced in case of late delivery, and the government's man could decide to demand less penalty if he felt the vendor was not entirely at fault. So most likely they simply planned to bribe that guy in case they were late. Thus the new rules simply moved the bribery into a different stage of the process.
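The arithmetic of the gaming is easy to reproduce. A minimal sketch (the weights 1 and 6 come from the story; the normalization scheme and the bid numbers are my own invention, since the actual scoring formula isn't given):

```python
# Hypothetical tender scoring: each criterion is normalized to [0, 1]
# (best offer scores 1), then weighted. Per the story, price has weight 1
# and the offered late-delivery penalty has weight 6.

def score(bids, price_weight=1, penalty_weight=6):
    best_price = min(price for price, _ in bids)
    best_penalty = max(penalty for _, penalty in bids)
    return [
        price_weight * (best_price / price)
        + penalty_weight * (penalty / best_penalty)
        for price, penalty in bids
    ]

# (price offered, penalty offered as % of price per day) -- made-up numbers
bids = [
    (100, 0.1),  # honest bid, typical 0.1% penalty offer
    (105, 0.1),  # honest bid
    (180, 2.0),  # outrageous price, but a 2% penalty offer
]

scores = score(bids)
print(scores.index(max(scores)))  # the 2%-penalty bid wins despite its price
```

With these numbers the honest bids score about 1.3, while the expensive bid scores over 6: the weight-6 criterion dominates, exactly as described.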

When humans are motivated by entirely external incentives like fsck everything let's make as much money on this project as possible, they behave just like the vibrating AI-Messi.

Which means that maybe we need to figure out what the heck the inner motivation in humans is that makes them want to do the sensible thing, and how to emulate it.

Comment author: DanielLC 14 July 2015 02:52:45AM 16 points [-]

I got an asset for Unity published. It's called HexGrid. It's basically an engine for tactical RPGs/wargames on a hexagonal grid.

In response to Crazy Ideas Thread
Comment author: michaelkeenan 09 July 2015 05:22:41PM 14 points [-]

Ban music in political campaign advertisements. Music has no logical or factual content, and only adds emotional bias.

Here's an example of an ad with music intended to give two different emotional tones (optimistic/patriotic in the first six seconds, then sinister in the rest).

Comment author: Lumifer 08 July 2015 04:36:16PM *  16 points [-]

EVE already works sufficiently like that.

In response to Crazy Ideas Thread
Comment author: James_Miller 07 July 2015 11:04:25PM 16 points [-]

To learn about the genetic and environmental basis of intelligence study children who have significantly higher IQs than either of their biological parents.

In response to comment by [deleted] on Open Thread, Jun. 8 - Jun. 14, 2015
Comment author: TezlaKoil 08 June 2015 09:43:34PM *  16 points [-]

Is such a long answer suitable in OT? If not, where should I move it?

tl;dr Naive ultrafinitism is based on real observations, but its proposals are a bit absurd. Modern ultrafinitism has close ties with computation. Paradoxically, taking ultrafinitism seriously has led to non-trivial developments in classical (usual) mathematics. Finally: ultrafinitism would probably be able to interpret all of classical mathematics in some way, but the details would be rather messy.

1 Naive ultrafinitism

1.1. There are many different ways of representing (writing down) mathematical objects.

The naive ultrafinitist chooses a representation, calls it explicit, and says that a number is "truly" written down only when its explicit representation is known. The prototypical choice of explicit representation is the tallying system, where 6 is written as ||||||. This choice is not arbitrary either: the foundations of mathematics (e.g. Peano arithmetic) use these tally marks by necessity.

However, the integers are a special^1 case, and in the general case the naive ultrafinitist insistence on fixing a representation starts looking a bit absurd. Take linear algebra: should you choose an explicit basis of R^3 and use it indiscriminately for every problem, or should you use a basis (sometimes an arbitrary one) that is most appropriate for the problem at hand?

1.2. Not all representations are equally good for all purposes.

For example, enumerating the prime factors of 2*3*5 is way easier than doing the same for ||||||||||||||||||||||||||||||, even though both represent the same number.

1.3. Converting between representations is difficult, and in some cases outright impossible.

Lenstra earned $14,527 by converting the number known as RSA-100 from "positional" to "list of prime factors" representation.

Converting 3^^^3 from up-arrow representation to the binary positional representation is not possible for obvious reasons.

As usual, up-arrow notation is overkill. Just writing the decimal number 100000000000000000000000000000000000000000000000000000000000000000000000000000000 would take more tally marks than there are atoms in the observable universe. Nonetheless, we can deduce a lot of things about this number: it is even, and it's larger than RSA-100. In fact, I can manually convert it to "list of prime factors" representation: 2^80 * 5^80.
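Points 1.1-1.3 can be made concrete in a few lines of code. A small sketch (the helper functions are mine, for illustration; `prime_factors` is naive trial division, fine for 30 but hopeless for something the size of RSA-100):

```python
# Three representations of the same number: tally, positional (decimal),
# and list of prime factors. Their sizes differ wildly, and converting
# between them ranges from trivial to practically impossible.

def tally(n):
    return "|" * n

def prime_factors(n):
    """Naive trial division -- exponential in the bit-length of n."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

n = 2 * 3 * 5
print(len(tally(n)), len(str(n)))  # 30 tally marks vs 2 decimal digits
print(prime_factors(n))            # [2, 2, ...] -- trivial to read off

# Scaled down from 10**80: the decimal form grows linearly in the
# exponent, the factor form stays tiny, and the tally form explodes.
print(prime_factors(10**4))        # four 2s and four 5s
```

The same asymmetry is what makes RSA work: multiplying the factor list back into positional form is easy, while recovering the factor list is the hard direction.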

2 Constructivism

The constructivists were the first to insist that algorithmic matters be taken seriously: proofs with algorithmic content are distinguished from proofs without such content, and concepts that are not computably equivalent are separated.

For example, there is no algorithm for converting Dedekind cuts to equivalence classes of rational Cauchy sequences. Therefore, the concept of real number falls apart: constructively speaking, the set of Cauchy-real numbers is very different from the set of Dedekind-real numbers.

This is a tendency in non-classical mathematics: concepts that we think are the same (and are equivalent classically) fall apart into many subtly different concepts.

Computability, however, is a qualitative notion, and even most constructivists stop here (or even backtrack to regain some classicality, as in the foundational program known as Homotopy Type Theory).

3. Modern ultra/finitism

The same way constructivism distinguished qualitatively different but classically equivalent objects, one could start distinguishing things that are constructively equivalent but quantitatively different.

One path leads to the explicit approach to representation-awareness. For example, LNST^4 explicitly distinguishes between the set of binary natural numbers B and the set of tally natural numbers N. Since these sets have quantitatively different properties, it is not possible to define a bijection between B and N inside LNST.

Another path leads to ultrafinitism.

The most important thinker in modern ultra/finitism was probably Edward Nelson. He observed that the "set of effectively representable numbers" is not downward-closed: even though we have a very short notation for 3^^^3, there are lots of numbers between 0 and 3^^^3 that have no such short representation. In fact, by elementary considerations, the overwhelming majority of them cannot ever have a short representation.

What's more, if our system of notation allows for expressing big enough numbers, then the "set of effectively representable numbers" is not even inductive because of the Berry paradox. In a sense, the growth of 'bad enough' functions can only be expressed in terms of themselves. Nelson's hope was to prove the inconsistency of arithmetic itself using a similar trick. His attempt was unsuccessful: Terry Tao pointed out why Nelson's approach could not work.

However, Nelson found a way to relate unexpressibly huge numbers to non-standard models of arithmetic^(2).

This correspondence turned out to be very powerful, leading to many paradoxical developments, including a finitistic^3 extension of Set Theory, a radically elementary treatment of Probability Theory, and new ways of formalising the Infinitesimal Calculus.

4. Answering your question

What kind of mathematics would we still be able to do (cryptography, analysis, linear algebra …)?

All of it, modulo translating the classical results into the subtler, ultra/finitistic language. This holds even for the silliest versions of ultrafinitism. Imagine a naive ultrafinitist mathematician who declares that the largest number is m. She can't state the proposition R(n, 2^m), but she can still state its translation R(log_2 n, m), which is just as good.
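The translation trick can be shown with a toy example (my own, not from the original answer): a statement about huge powers of two is restated as a statement about their small exponents, so only numbers up to m are ever mentioned.

```python
# Untranslated claim: "2**a divides 2**b" -- mentions 2**b, which the
# ultrafinitist may consider too big to "write down".
# Translated claim: "a <= b" -- mentions only the exponents.
# The translated form typically looks different (here, divisibility
# becomes an inequality), but it carries the same information.

def divides(x, y):
    return y % x == 0

a, b = 7, 12
untranslated = divides(2**a, 2**b)
translated = a <= b
print(untranslated, translated)  # both True: the two statements agree
```

The two predicates agree for all positive exponents, which is exactly what makes the translated statement "just as good".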

Translating is very difficult even in the qualitative case, as seen in this introductory video about constructive mathematics. Some theorems hold for Dedekind-reals, others for Cauchy-reals, etc. Similarly, in LNST, some theorems hold only for "binary naturals", others only for "tally naturals". It would be even harder for true ultrafinitism: the set of representable numbers is not downward-closed.

This was a very high-level overview. Feel free to ask for more details (or clarification).


^1 The integers are absolute. Unfortunately, it is not entirely clear what this means.

^2 coincidentally, the latter notion prompted my very first contribution to LW

^3 in this so-called Internal Set Theory, all the usual mathematical constructions are still possible, but every set of standard numbers is finite.

^4 Light Naive Set Theory. Based on Linear Logic. Consistent with unrestricted comprehension.

Comment author: ChristianKl 06 June 2015 11:09:04AM 15 points [-]

Unfortunately, no-one knows how to turn poor African countries into productive Western ones, short of colonization.

The UN millennium goal of halving the number of people in extreme poverty between 1990 and 2015 was achieved five years ahead of schedule. We achieved that without any colonization.

Comment author: advancedatheist 01 June 2015 04:08:08AM *  0 points [-]

This illustrates the problem I have with how we leave boys' sexual development to chance and just hope that they can figure it out somehow. What about the boys who can't, or who don't have these experiences and learn these skills at an appropriate age?

Sex And The Valley: Tech Guys Seek Expert Love Advice From Therapists

http://www.vocativ.com/culture/society/the-sex-therapists-of-silicon-valley/

“Dan” seems at first to perfectly embody that popular object of scorn these days in San Francisco: the privileged tech worker. He’s a developer-turned-manager at a thriving startup, the type of guy you would expect to see dodging protesters at a Google bus stop or evicting low-income tenants in order to build his dream condo. But beyond that veneer of untouchable privilege, there is a soft underbelly. He’s a 40-year-old virgin, and his troubles with women are bad enough that he’s sought out a sex therapist for help.

Comment author: [deleted] 21 May 2015 11:10:03PM *  14 points [-]

Despite Yudkowsky's obvious leanings, the Sequences are ... first and foremost about how to not end up an idiot

My basic thesis is that even if that was not the intent, the result has been the production of idiots. Specifically, a type of idiotic madness that causes otherwise good people, self-proclaimed humanitarians, to disparage the only sort of progress which has the potential to alleviate all human suffering, forever, on accelerated timescales. And they do so for reasons that are not grounded in empirical evidence, because they were taught through demonstration modes of non-empirical thinking from the sequences, and conditioned to think this was okay through social engagement on LW.

When you find yourself digging a hole, the sensible and correct thing to do is stop digging. I think we can do better, but I'm burned out on trying to reform from the inside. Or perhaps I'm no longer convinced that reform can work given the nature of the medium (social pressures of blog posts and forums work counter to the type of rationality that should be advocated for).

I don't care about Many Worlds, FAI, Fun theory and Jeffreyssai stuff, but LW was the thing that stopped me from being a complete and utter idiot.

I don't want to take that away. But for me LW was not just a baptismal font for discovering rationality; it was also an effort to get people to work on humanitarian relief and existential risk reduction. I hope you don't think me crazy for saying that LW has had a subject-matter bias in these directions. But on at least some of these accounts the effect had by LW and/or MIRI and/or Yudkowsky's specific focus on these issues may be not just suboptimal, but actually negative. To be precise: it may actually be causing more suffering than would otherwise exist.

We are finally coming out of a prolonged AI winter. And although funding is finally available to move the state of the art in automation forward, to accelerate progress in life sciences and molecular manufacturing that will bring great humanitarian change, we have created a band of Luddites that fear the solution more than the problem. And in a strange twist of double-think, consider themselves humanitarians for fighting progress.

If you had your own forum with lots of people, who share similar criticism of LW, hey, I'd go there and leave LW. But you don't have such forum, so by leaving LW you just leave people like me alone. What's the point of that? Do you really believe leaving LW like that is more utility, than trying to create an island within it?

I am myself working on various projects in my life which I expect to have positive effects on the world. Outside of work, LW has at times occupied a significant fraction of my leisure time. This must be seen as an activity of higher utility than working more hours on my startup, making progress on my molecular nanotech and AI side projects, or enriching myself personally in other ways (family time, reading, etc.). I saw the Rationality reading group as a chance to do something which would conceivably grow that community by a measurable amount, thereby justifying a time expenditure. However if all I am doing is bringing more people into a community that is actively working against developments in artificial intelligence that have a chance of relieving human suffering within a single generation… the Hippocratic corpus comes to mind: “first, do no harm.”

I am not sure yet what I will fill the time with. Maybe I'll get off my butt and start making more concrete progress on some of the nanotech and AI stuff that I have been letting slide in recent years.

I recognize also that I am making broad generalizations which do not always apply to everyone. You seem to be an exception, and I wish I had engaged with you more. I also will miss TheAncientGeek's contrarian posts, as well as many others who deserve credit for not following a herd mentality.

Comment author: Dustin 21 May 2015 10:38:23PM 15 points [-]

it is also curious to me that many here point me to articles written by Eliezer Yudkowsky to support their arguments

It's been my experience that this is usually done to point to a longer and better-argued version of what the person wants to say rather than to say "here is proof of what I want to say".

I mean, if I agree with the argument made by EY about some subject, and EY has done a lot of work in making the argument, then I'm not going to just reword the argument, I'm just going to post a link.

The appropriate response is to engage the argument made in the EY argument as if it is the argument the person is making themselves.

Comment author: Vaniver 18 May 2015 10:07:50PM 16 points [-]

I like your proposal. I note, in addition, that the best introduction is probably linking people to particular articles that you think they may find interesting, instead of the main page (which is what someone will find if you just tell them the name).

Comment author: [deleted] 13 May 2015 03:05:59AM 16 points [-]

Finished my doctorate in mathematics, in PDEs.

Started my new job at a private research lab.

Comment author: palladias 11 May 2015 06:37:36PM 16 points [-]

I published a book! And Amazon ran out on the second day of its release!

My book, Arriving At Amen: Seven Catholic Prayers that Even I Can Offer explains how I learned seven kinds of Catholic prayer after conversion.

I can promise it's the LW-iest book you've got to read on prayer, so, if you want to better understand a religious friend or have some ways to open a conversation, you might like it. Plus it cites Ender's Game and Terry Pratchett.

I had to learn prayer in the language of reference I spoke, so my chapter on Confession has a big section on the Sunk Cost Fallacy, and how it makes us afraid to make our sins "real" by acknowledging them. The chapter on Mass explains the communion of saints by referring to Cartesian coordinate systems and explaining how people can all be aligned along one dimension of interest.

I had a great time writing this, and, I should mention, Beeminder helped me pull it off!

Comment author: erratio 11 May 2015 03:14:03AM 16 points [-]
  • I finally finished all the work to leave grad school with an MA, and am officially graduated in just over a week's time.

  • I am getting a visa so that I'll be able to move to my boyfriend's country and move in with him. If you're in doubt about how awesome that is, you should see the mountain of bureaucracy I've had to deal with...

Comment author: [deleted] 11 May 2015 02:57:40AM 16 points [-]

I read about an art exhibition / contest, made an entry, sent it in, it was accepted and exhibited. I didn't win the prize but the exhibit has been in the international news for weeks. Two men apparently tried to murder those who were in attendance at the exhibition. I wrote an essay about the experience, submitted it to a publisher, it was accepted and published. I was paid for my essay.

http://freethinker.co.uk/2015/05/07/trigger-warning/

Comment author: Morendil 10 May 2015 01:32:03PM *  16 points [-]
  • A couple months ago I started learning the Elm programming language, and to make things interesting I resolved to push one non-empty code commit to GitHub every single day (ideally also non-trivial, but not everyone's definition of "trivial" will match mine). I'm now on day 67 of that streak, having written six proto-games (playable here if you're so inclined, though they're not hugely entertaining). So far the habit has resisted a new job and a ten-day vacation. I've also been keeping a daily journal since Feb 21.

  • Used my 3D printer (Prusa i3) to print the entire set of plastic printed parts for a different printer (FoldaRap), very much a non-trivial project (~ 50h of printing for 30+ distinct parts) that requires a well-tuned printer. I'm particularly proud as this comes on the tails of completing a major conversion of the Prusa from its original direct-extrusion design to a Bowden setup.

  • Got hired by the French government to promote a more agile style of programming and project management.

Comment author: Dentin 07 May 2015 07:30:03PM 15 points [-]

I fail to see why this is important enough to be a post. The problem is almost certainly politics and market distortion, not 'lack of water':

http://www.economist.com/news/united-states/21647994-why-golden-state-so-bad-managing-water-price-wrong

"Mr Brown put his foot on urban hosepipes while letting farmers carry on merrily wasting water, for which they pay far less than urbanites. Agriculture sucks up about 80% of the state’s water (excluding the half that is reserved for environmental uses). Farmers have guzzled ever more water as they have planted thirsty crops such as almonds, walnuts, and grapes. Meanwhile, urban water use has held relatively steady over the past two decades, despite massive population growth, thanks to smart pricing and low-flow toilets. Per-capita water use in California has declined from 232 gallons a day in 1990 to 178 gallons a day in 2010."

Comment author: Viliam 13 April 2015 12:15:04PM *  16 points [-]

We are invited here to attribute various "dangerous" ideas to Thiel. And he couldn't even deny them because, well, that's exactly what he would do if it was his dangerous idea, wouldn't he?

In other words, the rules of this game are: "Invent a controversial political idea and pretend that it is the idea Peter Thiel is trying to hide." No falsifiability; except for a possible group opinion that something is completely out of character. You get points for the idea being controversial; you don't lose points if it is not Thiel's idea. So why not simply post the most controversial idea you have?

We are invited to abuse the man's name as a pretext to publish our controversial ideas. Why not use our own names then? I suspect this is what people will do here anyway. They will just use Thiel's name to add status to their own ideas.

Comment author: Dutchmo 25 March 2015 06:49:54PM 16 points [-]

Folks growing up in the '50s, '60s, '70s, and early '80s will remember the existential threat of that era: communism.

ISIS has a long way to go before it can capture the fear of total nuclear annihilation. The Vietnam War, the Korean War, the Cold War, domino theory, The Day After, Red Dawn, WarGames, duck-and-cover drills in school...

Whenever I hear hand-wringing from these youngsters about ISIS, I have to chuckle. Back in my day we had real existential threats, sonny.

Now I'm going back to watch my copy of Rocky IV. On VHS, of course.

Comment author: fezziwig 10 March 2015 11:34:23PM 16 points [-]

Man, that's beautiful. What does Bellatrix Black want most, that Harry can offer?

She wants Tom Riddle to love her.

Comment author: buybuydandavis 10 March 2015 08:33:58PM *  16 points [-]

I like "The Girl Who Lived Again".

Comment author: gattsuru 08 March 2015 08:52:12PM *  16 points [-]

Quirrell had seen Harry use /Diffindo/ on some trees, and later that the trees had been cut. He was unconscious (and in an extradimensional bag) when Harry cut through the wall in Azkaban, and only saw a cut circle of wall. He may not have known that Harry had anything up his sleeve more complicated than a Cutting Charm; he certainly had no reason to believe that Harry could wordlessly transfigure the tip of his wand into well over a hundred feet of braided carbon nanotubes. Quirrellmort has never seen partial Transfiguration -- only Dumbledore, Hermione, and Professor McGonagall have.

But that's really not the part that got him.

Quirrellmort had accepted the risk that Harry could have escaped, or killed everyone present, just as he accepted the possibility that the Unbreakable Vow wouldn't have been enough to stop Harry from destroying the world. If he were absolutely certain, he'd not have bothered with backup plans. He did not care about the deaths of the present Death Eaters, and losing his own body was merely a minor setback. It's Harry's ability to instantly and permanently incapacitate him without letting Quirrellmort's spirit loose that made the threat serious. That's a problem Dumbledore was relying on an ancient and frighteningly powerful artifact to solve, and Quirrellmort's mode of thinking doesn't exactly encourage dwelling on these matters.

Comment author: Khoth 08 March 2015 08:29:44PM *  16 points [-]

Why on earth is Prof McGonagall announcing in public that a bunch of children's parents are dead and were evil? That seems a really, really terrible way to break the news to them.

I'd expect at the very least she'd tell them privately in advance, and probably wouldn't say it in public at all, except in very general terms.

Comment author: dxu 07 March 2015 12:34:05AM *  16 points [-]

Since this seems like a pretty transparent metaphor for Friendly AI, it looks as though Eliezer is planning to go through with his idea of crowdsourcing FAI research. Any predictions for how this is going to go? I'm personally not optimistic that the subreddit is actually going to produce any important, novel results*, but at the very least, it'll increase exposure to the idea of FAI with a general audience. (After all, HPMoR was what originally brought me to LW.)


* It seems to me that the main strength of crowdsourcing in solving problems is the ability to propose a truly gigantic number of solutions in a very short amount of time, which only helps if (a) the true solution is easy enough to guess that someone can stumble upon it largely by chance, (b) other people then recognize the solution as a good one and upvote it, and (c) the solution is easily testable to see if it is a good one or a bad one (otherwise people will keep on proposing solutions without realizing that they've already stumbled across the right answer). All three of these were true of HPMoR; all of them are probably false in the context of FAI research.

Comment author: CellBioGuy 05 March 2015 02:18:56AM 16 points [-]

The Boy Who Lived kills 30 high-ranking witches and wizards, resurrects his first kiss, fakes the death of Voldemort, and wears him as a ring!

Comment author: JenniferRM 01 March 2015 05:11:18AM *  15 points [-]

Just finished reading. Wow! This story is so bleak. I suspect Voldemort just "identity raped" Harry into becoming an Unfriendly Intelligence? Or at least a grossly grossly suboptimal one. Harry himself seems to be dead.

I'm going to call him HarryPrime now, because I think the mind contained in Riddle2/Harry's body before and after this horror was perpetrated should probably not be modeled as "the same person" as just prior to it.

HarryPrime is based on Harry (sort of like an uploaded and modified human simulation is based on a human) but not the same, because he has been imbued with a mission that he must implacably pursue, one that has Harry's identity (and that of the still-unconscious(!) and never-interviewed(!) Hermione) woven into it as part of its motivational structure, in a sort of twist on coherent extrapolated volition.

"if we knew more, thought faster, were more the people we wished we were, had grown up farther together"

Versus how "old Harry" and "revived Hermione" were "#included" into the motivational structure of HarryPrime:

Unless this very Vow itself is somehow leading into the destruction of the world, in which case, Harry Potter, you must ignore it in that particular regard. You will not trust yourself alone in making such a determination, you must confide honestly and fully in your trusted friend, and see if that one agrees. Such is this Vow's meaning and intent. It forces only such acts as Harry Potter might choose himself, having learned that he is a prophesied instrument of destruction.

My estimate of Voldemort's intelligence just dropped substantially. He is well trained and in the fullness of his power, but he isn't wise... at all. I'd been modeling him as relatively sane, because of past characterization, but I didn't predict this at all.

(There are way better ways to get a hypothetical HarryPrime to "not do things" than giving him a mission as an unstoppable risk-mitigation robot. Of course, prophecy means self-consistent time travel is happening in the story, and self-consistent time travel nearly always means that at least some characters will be emotionally or intellectually blinded to certain facts (so that they do the things that bring about the now-inevitable future) unless they are explicitly relying on self-consistency to get an outcome they actively desire, so I guess Voldemort's foolishness is artistically forgivable :-P

Also, still going meta on the story, this is a kind of beautiful way to "spend" the series... bringing it back to AI risk mitigation themes in such a powerfully first person way. "You [the reader identifying with the protagonist] have now been turned by magic into an X-risk mitigation robot!")

Prediction: It makes sense now why Riddle2/HarryPrime will tear apart the stars in heaven. They represent small but real risks. He has basically been identity raped into becoming a sort of Pierson's Puppeteer (from Larry Niven's universe) on behalf of Earth rather than on behalf of himself, and in Niven's stories the Puppeteers' evolved cowardice (because they evolved from herd animals, and are ruled by "the Hindmost" rather than a "leader") forced them into minor planetary engineering.

As explained in Le Wik:

"In short, we found that a sun was a liability rather than an asset. We moved our world to a tenth of a light year's distance, keeping the primary only as an anchor. We needed the farming worlds and it would have been dangerous to let our world wander randomly through space. Otherwise we would not have needed a sun at all.

"We had brought suitable worlds from nearby systems, increasing our agricultural worlds to four, and setting them in a Kemplerer Rosette."

Prediction: HarryPrime's first line will be better than any in the LW thread where people talked about the one sentence ai box experiment. Eliezer read that long ago and has thought a lot about the general subject.


Something I'm still not sure about is what exactly HarryPrime will be aiming for. I think that's where Eliezer retains some play in his control over whether the ending is very short and bleak or longer and less bleak.

Voldemort kept talking about "destruction of the world" and "destroying the world" and so on. He didn't say the planet had to have people on it, but he might not have been talking about the planet. "The world" in normal speech often seems to mean in practice something like "the social world of the humans who are salient to us". Like in the USA people will often talk about "no one in the world does X" when there are people in other countries who do, and if someone points this out they will be accused of quibbling. Similarly, we tend to talk about "saving the earth" and it doesn't really mean the mantle or the core; it primarily means the biosphere and the economy and humans and stuff.

From my perspective, this was the key flaw of the intent:

But all Harry Potter's foolishness, all his recklessness, all his grandiose schemes and good intentions - he shall not risk them leading to disaster! He shall not gamble with the Earth's fate!

The literal text appears to be:

I shall not by any act of mine destroy the world. I shall take no chances in not destroying the world. If my hand is forced, I may take the course of lesser destruction over greater destruction unless it seems to me that this Vow itself leads to the world's end, and the friend in whom I have confided honestly [ie Hermione] agrees that this is so.

And then the errata and full intention was:

You will swear, Harry Potter, not to destroy the world, to take no risks when it comes to not destroying the world.

This Vow may not force you into any positive action, on account of that, this Vow does not force your hand to any stupidity... We must be cautious that this Vow itself does not bring that prophecy about.

We dare not let this Vow force Harry Potter to stand idly after some disaster is already set in motion by his hand, because he must take some lesser risk if he tries to stop it.

Nor must the Vow force him to choose a risk of truly vast destruction, over a certainty of lesser destruction.

But all Harry Potter's foolishness, all his recklessness, all his grandiose schemes and good intentions - he shall not risk them leading to disaster!

He shall not gamble with the Earth's fate!

No researches that might lead to catastrophe! No unbinding of seals, no opening of gates!

Unless this very Vow itself is somehow leading into the destruction of the world, in which case, Harry Potter, you must ignore it in that particular regard.

You will not trust yourself alone in making such a determination, you must confide honestly and fully in your trusted friend, and see if that one agrees.

In the shorter and sadder ending, I think it is likely that HarryPrime will escape, but not really care about people, and become an optimizing preservation agent of the mere planet. Thus Harry might escape the box and then start removing threats to the physical integrity of the earth's biosphere.

Also the "trusted friend" stuff is dangerous if Hermione doesn't wake up with a healthy normal mind. In canon, resurrection tended to create copies of what the resurrector remembered of a person, not the person themselves.

In the less sad ending I hope/think that HarryPrime will retain substantial overlap with the original Harry, Hermione will be somewhat OK, and the oath will only cause HarryPrime to be constrained in limited and reasonably positive ways. Maybe he will be risk averse. Maybe he will tear apart the stars because they represent a danger to the earth. Maybe he will exterminate every alien in the galaxy that could pose a threat to the earth. Maybe he will constrain the free will of every human on earth to not allow them to put the earth at risk... but he will still sorta be "the old Harry" while doing so.

Comment author: IlyaShpitser 28 February 2015 10:47:42AM 16 points [-]

The original Star Trek is a Western, it's about people trying to do the right thing out on the lawless frontier. Why are people still watching Westerns?

Comment author: advancedatheist 26 February 2015 07:40:15PM *  0 points [-]

to reject the immoral acts of fornication

This raises something I've wondered about: These injunctions from the Abrahamic religions assume that young men have opportunities for "fornication" in the first place. What if the young women in your life do all of the rejecting to keep this from happening?

Comment author: Lumifer 23 February 2015 06:45:59PM 16 points [-]

As a proxy for elite quality can I suggest this list of Real World Development Indicators, hot off the presses:

  • Number of tall buildings not occupied by the government or United Nations
  • Probability that the President/Prime Minister seeks medical treatment in own country
  • Proportion of political leaders younger than the average life expectancy
  • Proportion of resort vacationers from that or neighboring countries
  • Percent of young people that prefer to start a business rather than work for an NGO
  • Percent of undergraduate students taking a real major, rather than development studies
  • Number of wrecked airplanes near the runway of the main airport
  • Proportion of NGO websites not written in English or French
  • Number of people who take pictures of you
  • Percent of people too busy to answer your survey
  • Number of government officials who give foreign experts the “who the hell are you?” look

Comment author: Evan_Gaensbauer 23 February 2015 02:58:00PM *  16 points [-]

I'm drafting a post for Discussion about how users on LessWrong who feel disconnected from the rationalist community can get involved and make friends and stuff.

What I've got so far:

  • Where everybody went away from LessWrong, and why
  • How you can keep up with great content/news/developments in rationality on sites other than LessWrong
  • Get involved by going to meetups, and using the LW Study Hall

What I'm looking for:

  1. A post I can link to about why the LW Study Hall is great.

  2. Testimonials about how attending a meetup transformed your social or intellectual life. I know this is the case in the Bay Area, and I know life became much richer for some friends I have in, e.g., Vancouver or Seattle.

  3. A repository of ideas for meetups, and other socializing, if somebody planning or starting a meetup can't think of anything to do.

  4. How to become friends and integrate socially with other rationalists/LWers. A rationalist from Toronto visited Vancouver, noticed we were all friends, and was asking us how we became all friends, rather than a circle of individuals who share intellectual interests, but not much else. The only suggestions we could think of were:

Be friends with a couple people from the meetup for years before, and hang out with everyone else for 2 years until it stops being awkward.

and

If you can get a 'rationalist' house with roommates from your LW meetup, you can force yourselves to rapidly become friends.

These are bad or impractical suggestions. If you have better ones to share, that'd be fantastic.

Please add suggestions for the numbered list. If relevant resources don't exist, notify me, and I/we/somebody can make them. If you think I'm missing something else, please let me know.

Comment author: DanArmak 21 February 2015 06:41:52PM 16 points [-]

Harry thinks it's because making a Horcrux for someone else pattern-matches "teaching your most powerful spells to others", which pattern-matches "helping others altruistically", and Voldemort has an ugh field around that concept, or at least a blind spot. For what it's worth, Voldemort agreed with this analysis.

Comment author: Astazha 21 February 2015 02:01:49PM 16 points [-]

Agreed, and I want to expand that a little:

Muggle science determined that muggle minds are contained in muggle brains, and Harry has been reluctant to let go of this idea even though there are observations against it and he has seen that magic can freely violate very solid muggle conclusions like conservation of matter.

Even if muggle brain damage seems to damage the mind, it could be that it damages the mind's interface to the body. Here in the real world, this dualism adds additional complications and doesn't help explain any evidence. In the HPMOR universe there is a great deal that would be explained by mind/body dualism.

Animagus transfigurations almost require it. Skeeter's mind is not contained in the physical arrangement of a beetle's brain. Therefore, her mind isn't just a physical brain in this world. Her brain could be held in some extradimensional pocket and interfacing to the beetle. Her mind could be running on a magical, rather than physical substrate (always or just during transformation?). She could have a soul. (And some versions of "mind on a magical substrate" would also qualify as "souls".)

As DanArmak says, Quirrell didn't just have backups of himself in Horcruxes; he was able to think and perceive while this was his only form of existence. Those copies were running, thinking, planning. They were also connected to each other, and still are. Quirrell was not revived from the Pioneer Horcrux, but he has the memories of the Pioneer Horcrux's experiences. The Pioneer plaque, or a pebble, or whatever, is not a physical substrate that a mind can run on by any means natural to muggle science. Again we have dualism: brain in another dimension, magical substrate, soul. And brain in another dimension gets pretty strained here, I think.

You, boy, you brought that about, you freed my spirit to fly where it pleases and seduce the most opportune victim, by being too casual with your secrets.

Here Quirrell's mind is totally disembodied, with the help of the Resurrection Stone.

Ch. 1:

There's a quote there about how philosophers say a great deal about what science absolutely requires, and it is all wrong, because the only rule in science is that the final arbiter is observation - that you just have to look at the world and report what you see.

I observe a world where minds are not just physical arrangements of brains. The "we are just our brains" hypothesis is being falsified all over the place in HPMOR.

For me there is no question about whether disembodied minds exist in this universe. My questions are whether minds are disembodied all the time or just when magic requires it. Whether muggles also have disembodied minds that are just much more inaccessible to observation. "Minds are always disembodied" seems more elegant by far than magic translating your physical brain into another equivalent form and creating an interface between that and your body only during animagus transformations and other such events, translating that back when returning to human form, and your mind just being a brain at all other times. That would be way more complicated than dualism.

Comment author: DanArmak 21 February 2015 02:16:26AM 16 points [-]

Quirrel says:

the fact is that Miss Greengrass was not supposed to arrive in that corridor for several hours. I am not sure why her party arrived in Mr. Malfoy's company, and had Mr. Nott arrived seemingly alone, events would have played out less farcically.

That seems very important, so why didn't he ask any of them why they arrived early? It looks like a blatant mistake on his part.

Comment author: linkhyrule5 20 February 2015 10:29:01PM 16 points [-]

... Huh.

The power that the Dark Lord knows not... might end up being love after all.

Comment author: DanielLC 20 February 2015 08:09:02AM *  16 points [-]

I have been thinking about something like this, but from a different angle.

There's a lot of things in life where you do a lot of things that use up resource X, but one or two dominate. If you try to conserve X, there's not much point in conserving it anywhere else. Unfortunately, few people bother to work out what dominates, and they try to conserve everywhere.

Trying to save water? Don't worry about anything except meat consumption and lawns. Trying to save electricity? Stick to trying to save on heating and air conditioning. Don't bother with the lights. Trying to help people? Stick to GiveWell's top recommendations. Don't bother with anyone local.

Trying to save money? It might not be worth bothering to clean your own house.

Edit: If you disagree about what I'm saying you should worry about, feel free to comment. I'm interested.
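The heuristic above (find what dominates your consumption of a resource and ignore the rest) can be sketched in a few lines. The figures below are made-up placeholders for illustration, not real usage data:

```python
# Sketch of the "conserve only where it dominates" heuristic.
# All numbers are illustrative placeholders, not real consumption data.

def dominant_uses(usage, threshold=0.5):
    """Return the uses that together account for at least `threshold`
    of total consumption, largest first."""
    total = sum(usage.values())
    picked, covered = [], 0.0
    for name, amount in sorted(usage.items(), key=lambda kv: -kv[1]):
        picked.append(name)
        covered += amount / total
        if covered >= threshold:
            break
    return picked

# Hypothetical household water budget (arbitrary units):
water = {"lawn": 60, "meat_diet": 30, "showers": 6, "dishes": 3, "drinking": 1}
print(dominant_uses(water, threshold=0.8))  # → ['lawn', 'meat_diet']
```

On these invented numbers, lawn and diet cover 90% of the total, so effort spent anywhere else barely moves the needle; that is the whole point of working out what dominates before conserving.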

Comment author: MathiasZaman 19 February 2015 10:17:01AM 16 points [-]

Since you can vote for multiple charities, there's no reason (apart from your personal feelings about the individual charities, of course) not to also vote for the following (the last two are GiveWell-rated charities):

Comment author: Kindly 18 February 2015 01:40:47AM 16 points [-]

Voldemort is the last known Parselmouth, so it would be highly suspicious for Quirrell to also be one.

Comment author: [deleted] 12 February 2015 02:56:02PM *  16 points [-]

Xerographica,

I have seen your comments pushing this idea of yours on a number of economics blogs, and I have wanted to reach out to you for some time. Your idea is not obviously a bad one. It is not obviously a right one either! But it does not take such a great deal of imagining to see how allowing taxpayers to allocate their taxes could be an improvement on the current system.

However, your presentation is... imperfect. You need to realize that your idea is weird and a substantial change from the status quo. "Weird" and "change from the status quo" are difficult handicaps in the political sphere. You need to be able to articulate your most important arguments and ideas in short, powerful statements. How Robin Hanson advocates futarchy might be a model for you. I myself intend to advance some arguments that, while quite uncontroversial as derivations from economic theory, are nevertheless disapproved of within the profession; it is like a mathematician fidgeting when a physicist treats dy and dx as two different variables, only here the physicist is completely right and the mathematician knows it. It is not easy, and it can be frustrating.

You must be prepared to do the bulk of the work in this conversation, even when you have already done the work. That means a post full of links to your other posts is insufficient even if your other posts are sufficient. And they are not. Your FAQ, for example, is woefully incomplete. It does not even explain what your idea is.

You need to engage substantively with the weakest points of your argument and the public choice literature. There is a great deal of work for you to do to fully understand this position. For example, do taxpayers vote on how the total tax fund is allocated? Or do they each choose where to send the taxes they personally pay? If the former, your core argument, that taxpayer-allocated taxes will yield improvements because the allocations will reflect the opportunity cost of the taxpayer choices, is wrong. (It is wrong or at least very incomplete in the latter case as well, but I "see what you mean" and have no desire to quibble.) If the latter, you need to explain how this choice happens. Does a form come with the income tax form allowing you to circle "military," "environment," "welfare," and so on, indicating where your money should be sent? If you choose "military," does Congress then spend the money as it pleases so long as it claims the spending is military-related, or can one then choose between "tank," "gun," "therapy," and so on? How would this work for a sales tax? What if you want the money to be spent on an option Congress doesn't offer? And so on. These are the sorts of things that should be in your FAQ.

Does this lead to better policy because it makes lobbying pointless? Or do lobbyists turn from Congress to voters, manipulating them with propaganda, advertisements, and misleading rhetoric? Do we have less war, because the voters would never choose to impose such a conflict on themselves, or do we have more war, because ignorant, uneducated voters are more subject to jingoism and outgroup-hatred than the educated members of Congress? And so on.

Will policy be worse because taxpayers are substantially ignorant about what Congress does? (Imagine their surprise when they try to lower foreign aid and end up increasing it tenfold!) Or will policy be better because Congress won't be able to do anything taxpayers aren't aware of? Or will it be the same because taxpayers will basically vote for the status quo? And so on.

If taxpayers don't vote, what does this imply for your scheme?

Is your scheme always a good idea under any conditions? Would a dictatorship benefit from this kind of system? (The taxpayers can't kick the leaders out, but they can "suggest" where money should be spent.) When is your scheme a bad idea? And so on. Try to beat your own argument.

See what people like Robin Hanson and Scott Sumner do to advance their ideas, which are a bit odd and yet substantially grounded in familiar economics, and try to sound more like them. See what they do to make their ideas strong, and try to gain that kind of strength. And so on.

Good luck. You are not obviously wrong, but you are running a marathon uphill while underwater. It is going to take a very special approach and lots of practice. Be patient, improve yourself. Expand on that FAQ so that people have some idea of what you're talking about.

Comment author: Nornagest 10 February 2015 06:34:45AM *  16 points [-]

I found a lost Dutch passport and restored it to its rightful owner, whom I'd never met and who hadn't included contact info, in less than an hour thanks to proactivity on my part and the magic of Google.

It's a small thing, but I feel like I've done something for niceness, community, and civilization nonetheless.

Comment author: Gunnar_Zarncke 08 February 2015 04:03:41PM 16 points [-]

My work in a free-lance engagement as a very senior software developer attracted the attention of the CTO. To secure my know-how and positive influence long-term, I was hired last week into a system architect position that was specially created to match my salary requirements (basically my Happy Price). The team applauded when the decision was made official.

Strictly speaking, the salary is less than the free-lance rate (after adjusting for insurance, tax and misc. risk padding), but the newly created position is basically exactly what I'd always wished for.

There are some risks that I might fail to live up to expectations (my own included) - but one reason I took the risk is that I didn't fail in anything larger for quite some time and apparently I'm not trying hard enough (see also What have you recently tried, and failed at?).

I'm grateful for LessWrong teaching me (mostly by providing just the right references) lots of social skills without which this just wouldn't have been possible.

What I find interesting is that my development, which feels so genuinely like an improvement on my earlier self, is nonetheless typical: this transition from mostly development work in a team to a position with more supervision and political aspects appears to be normal for my age (41), and in a way even necessary to avoid dead ends in coding.

The risks of the position are to a large part related to company politics, of which I have just recently gotten a taste. I have seen Political Skills which Increase Income, but I'm not sure that it really helps me solve the politics games that might wreak havoc on my still largely technical plans. I'd appreciate input on how to deal with the impact of politics on technical plans. One source I have already used is Driving Technical Change by Terrence Ryan.

Comment author: pjeby 04 February 2015 07:39:39PM *  13 points [-]

Can you comment on whether the existence of the loophole does or does not indicate that the airline is charging more than it needs to / why the destruction of the loophole does or does not eliminate some sort of market inefficiency and/or undermine a price gouging strategy?

I've bowed out of the thread as a whole, but since this is a technical question and not a moral one, I'll go ahead and reply. ;-)

First, a bit of background. Under standard cost accounting assumptions, you can say that a seat on an airplane "costs" a certain amount, based on taking the total cost of staffing, maintenance, fuel, etc. Each of these cost allocations is largely arbitrary, however: the truth is that the airline has certain fixed and variable overheads, period, and if the flight is happening at all, the vast majority of those costs have nothing to do with how many people actually travel on the flight. But let's say that we arbitrarily assign costs equally for every seat on the plane, and call that the "fair" price.

(Or, more precisely, the "equal cost allocation" or ECA price, since this "fairness" is just a System 1 intuition that breaks down under closer inspection.)

If airlines asked the ECA price for every seat on every flight, it would be a fairly high amount. This would be more "fair", under some intuitions, but would result in lots of travel not happening at all. People who found the ECA price too high for their intended use would not buy the ticket. This would result in ECA prices rising, because the airline still has fixed costs for the flight as a whole. When we divide those costs by the (new, lower) number of people actually flying, the cost goes up. Indeed, the airline has to charge an amount that's enough to cover the flight if only a few people show up, or it has to have the option to cancel the flight as underbooked, or it has to offer fewer flights to raise the demand for each flight.

Now we have a market inefficiency:

  • There are empty seats on flights that nobody can use
  • The few people who travel are paying exorbitant amounts to do so, and
  • You can't choose between very many flights.

Under this condition, everybody loses!

Price discrimination solves this problem by decoupling "price" and "cost". The truth is, there is no such thing as how much a seat on a plane "costs": the allocation of fixed overheads and flight overheads is arbitrary and made-up. What really matters is the total revenue brought in by the flight. Our ECA price is a fiction, and discarding that fiction allows us to offer better prices for everybody.

As long as there is some way for the airline to sell at different prices to different people, they can get closer to selling out their flights, by offering the ECA price only on average, rather than by asking everyone to pay it. A business traveler who could afford an ultra-high ECA will actually pay less than under the inefficient-market ECA condition, because the flight is full of vacationers paying smaller amounts to make up for the difference. The vacationers get a lower-than-current ECA price, because there are business travelers making up the difference (though still not paying the inefficient-market ECA price).
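The argument can be checked with toy numbers. All figures below are invented for illustration, not real airline data: a flight with a fixed operating cost, some high-willingness business travelers, and some low-willingness vacationers, comparing a uniform break-even price against a two-tier discriminated one.

```python
# Invented illustration of uniform (ECA-style) pricing vs. price discrimination.
FIXED_COST = 50_000        # total cost of operating the flight, regardless of load
SEATS = 100
business = {"count": 40, "max_price": 1_300}   # willing to pay up to $1300
vacation = {"count": 60, "max_price": 400}     # willing to pay up to $400

# Uniform pricing: the break-even price if only business travelers buy.
uniform_price = FIXED_COST / business["count"]   # $1250 per seat
flies_uniform = business["count"] if uniform_price <= business["max_price"] else 0
# Vacationers are priced out entirely at $1250; 60 seats fly empty.

# Discriminated pricing: each group pays below its ceiling, flight still breaks even.
biz_fare, vac_fare = 800, 300
revenue = business["count"] * biz_fare + vacation["count"] * vac_fare

print(f"uniform: {flies_uniform} passengers at ${uniform_price:.0f}, 60 empty seats")
print(f"discriminated: {SEATS} passengers, revenue ${revenue} covers ${FIXED_COST}")
```

In this toy example both groups do better under discrimination: business travelers pay $800 instead of $1250, and vacationers get to fly at all. That is the sense in which the "fair" uniform price is a fiction everyone benefits from discarding.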

This is why the airlines have all the weird fare rules, like roundtrips being cheaper if they cross over a weekend. It's why they want the name of the person traveling, and charge for changes. These are all things designed to separate vacationers from business travelers, so that the plane can be filled and the costs all covered.

The problem is, if business travelers succeed at pretending to be vacationers (e.g. by using half of a pair of round-trip tickets, or wrapping round trips so as to create a fake weekend stayover), then the flight fills up, but at a below-current ECA price. The airline loses money, because they can't make up in volume what they're losing on every seat. That's why they have all sorts of gotchas and enforcement on these loopholes.

Unfortunately, human beings' System 1 intuitions tell them that if somebody is paying $X for a seat, then that must be the "fair" price for any seat on the plane, and that the airline ought to charge everybody that price. So the fare rules are widely despised, even though the result of not having them would be everybody paying a higher price, for fewer seats on fewer flights, if they can fly at all.

Anyway, the specific loophole discussed in this article ("Hidden city ticketing", per Wikipedia) doesn't come from this particular form of price discrimination. It's a different form that basically is done to compete with airlines offering direct flights between two cities, A and B. The airline offering the deal seeks to serve A<->B passengers without adding a direct flight, by using excess capacity that's available on the A<->C and C<->B routes. The airline can offer these seats below the A<->C ECA, because it expects not to fill them with other A<->C or A<->Wherever passengers anyway.

The problem is that if lots of people who were paying something close to the full-flight ECA price for an A<->C ticket, now switch to using hidden city ticketing, they are now losing the airline money on the A<->C flight, because there are more people paying less, instead of some people paying more and others paying less. The airline now has to start raising the direct A<->C price in order to make up the difference, creating more price instability.

tl;dr: Price discrimination is efficient in a market sense, even though it often feels "unfair" to System 1 (because different people are paying different prices for the "same" thing), thereby motivating people to try to work around the discrimination. But successfully working around the discrimination increases the actual unfairness, since the price you get is now discriminated in part by how much extra work you're willing to do to game the system, and because the business is then motivated to make things more discriminatory or is forced to change what offerings it makes available.

(This is what makes loopholes anti-inductive and the use of loopholes a prisoner's dilemma "defection", because you are defecting against your fellow consumers in a game where if everyone defects, everyone loses. That's why talking about such a loophole is so foolish: you are encouraging more people to defect, which reduces the number of co-operators whose cooperation you're exploiting. Even assuming you're going to defect in a PD game like this, telling other people about it is probably the stupidest thing you can do, from a game theory/policy perspective, and that principle applies to a lot more things than airplane tickets.)

Comment author: Vaniver 02 February 2015 07:52:08PM *  16 points [-]

Paul Graham wrote an article called What You Can't Say that seems somewhat relevant to your position, and in particular engages with the instrumental rationality of epistemic rationality. I bring that one up specifically because his conclusion is mostly "figure out what you can't say, and then don't say it." But he's also a startup guy, and is well aware of the exception that many good startup ideas seem unacceptable, because if they were acceptable ideas they'd already be mature industries. So many heresies are not worth endorsing publicly, even if you privately believe them, but some heresies are (mainly, if you expect significant instrumental gains from doing so).

I grew up in a Christian household and realized in my early teens that I was a gay atheist; I put off telling people for a long time and I'm not sure how much I got from doing so. (Both of my parents were understanding.) Most of my friends were from school anyway, and it was easy to just stop going to church when I left town for college, and then go when I'm visiting my parents out of family solidarity.

My suspicion is that your wife would prefer knowing sooner rather than later. I also predict that it is not going to get easier to tell her or your children as time goes on: if anything, the older your children get and the more religious memes and norms they absorb, the more your public deconversion would affect them.

Comment author: Viliam_Bur 28 January 2015 02:34:28PM 16 points [-]

Imagine casting a "speed ×100" spell on a dumb person. Would that make them a smart person? No.

On the other hand, if we would cast a "speed ×2" spell on a smart person, it would appear to make them smarter. They would be able to solve difficult problems in half the time, right?

So... there seems to be some connection, but also a difference. Speed can make you more productive, and productivity is a signal of intelligence. But if you make systematic mistakes in thinking, you will only be making them faster.

Smart people in the technology world no longer believe they can think their way to success.

Because they already are thinking. If you are already thinking at near 100% of your capacity, telling you "think more" is not going to help. The right advice in that situation could be "instead of thinking without experimenting, try thinking and experimenting". But one should give that advice only to people who are already thinking.

Comment author: Daniel_Burfoot 17 January 2015 08:16:02PM 16 points [-]

in the USA you're four times more likely to be struck by lightning than by terrorists

Our minds are actually picking up on a valid statistical issue here, which is that the number of people killed by terrorists is much more variable than the number of people killed by lightning. Since lightning strikes are almost completely uncorrelated random events, the distribution of deaths by lightning is governed by the Central Limit Theorem and so is nearly Gaussian. If X people died from lightning in 2014, then it is very unlikely that 2X people will die from lightning in 2015, and astronomically unlikely that 100X people will so die.

In contrast, if X people die from terrorism in 2014, you cannot deduce very much about the probability that 100X people will die from terrorism in 2015. Nassim Taleb would say that lightning deaths happen in Mediocristan while terrorism deaths happen in Extremistan.
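The Mediocristan/Extremistan difference is easy to see in simulation. The parameters below are arbitrary (they are not real death statistics); the point is only the shape of the distributions: a CLT-governed count barely varies from year to year, while a heavy-tailed one produces occasional years that dwarf the median.

```python
import random
import statistics

random.seed(0)
YEARS = 10_000

# "Lightning": the sum of many independent small risks, so by the CLT
# the yearly count is approximately Gaussian around its mean.
lightning = [random.gauss(mu=30, sigma=30 ** 0.5) for _ in range(YEARS)]

# "Terrorism": modeled as heavy-tailed (a Pareto with alpha < 2 has
# infinite variance, so the CLT gives no such guarantee).
terrorism = [30 * random.paretovariate(1.5) for _ in range(YEARS)]

for name, xs in [("lightning", lightning), ("terrorism", terrorism)]:
    med = statistics.median(xs)
    print(f"{name}: median {med:.0f}, worst year {max(xs):.0f}, "
          f"worst/median ratio {max(xs) / med:.1f}")
```

Over 10,000 simulated "years" the Gaussian series never strays far from its median, while the Pareto series reliably produces a worst year tens or hundreds of times the median, which is exactly why last year's count tells you so little about next year's.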

Comment author: Punoxysm 16 January 2015 10:29:53PM 16 points [-]

I am enrolled in a weightwatchers-like program.

My doctor recommended it to me 6 months ago and I said "doc, I can understand nutrition and exercise myself! no need for a program like this. I'll lose weight with my own methods and show you in a follow-up!"

One follow-up later, I'm 10 pounds heavier and agree to enroll.

If you're not rational enough to get it done one way, try a different way.

Comment author: Sean_o_h 16 January 2015 10:44:10AM 16 points [-]

An FLI person would be best placed to answer. However, I believe the proposal came from Max Tegmark and/or his team, and I fully support it as an excellent way of making progress on AI safety.

(i) All of the above organisations are now in a position to develop specific relevant research plans, and apply to get them funded - rather than it going to one organisation over another. (ii) Given the number of "non-risk" AI researchers at the conference, and many more signing the letter, this is a wonderful opportunity to follow up with that by encouraging them to get involved with safety research and apply. This seems like something that really needs to happen at this stage.

There will be a lot more excellent projects submitted for this than the funding will cover, and this will be a great way to demonstrate that there are a lot of tractable problems and immediately undertake-able work to be done in this area. This should hopefully both attract more AI researchers to the field, and additional funders who see how timely and worthy of funding this work is.

Consider it seed funding for the whole field of AI safety!

Sean (CSER)

Comment author: JoshuaFox 16 January 2015 08:14:23AM *  16 points [-]

Musk's position on AI risk is useful because he is contributing his social status and money to the cause.

However, other than being smart, he has no special qualifications in the subject -- he got his ideas from other people.

So, his opinion should not update our beliefs very much.

Comment author: Luke_A_Somers 14 January 2015 05:52:23PM 16 points [-]

Oh, dang. Well, I mean, phew? Both? See, I thought this was going to be a news story.

Comment author: TheMajor 14 January 2015 01:22:29AM *  16 points [-]

Here is what I believe they did, judging from your linked news article and their article on arxiv:

They start with a single green photon (photon with a wavelength of 532 nm), and send it through a beam splitter. This object does exactly what it sounds like it should do: you take a light beam and it splits it into two light beams of half the intensity. And if you send in a single photon it goes into a superposition of taking both paths (similar to the double slit experiment, except that the two paths are immediately recombined there and here they are not). Then in each of the paths they place a downconverter, which will transform a green photon into a yellow photon and a red photon (actually both are infra-red, but the naming scheme is easier for explaining what happens). So now our original photon is in a superposition of being a yellow plus a red photon in path 1 and being a yellow and a red photon in path 2.

An important thing about downconversion is that it has to preserve all kinds of invariants, in particular momentum and energy. Therefore if you know everything about the photon that goes into a downconversion process (here: a green photon. And 'everything' is very inaccurate - here they care only about the momentum, i.e. in which direction it is going, and its energy, i.e. which colour it is) and you measure just one of the two photons that come out of the downconversion (the red one or the yellow one, you can pick) you perfectly know what happened to the other photon. They make clever use of this later.

So: photon in a superposition of being yellow and red in path 1 and being yellow and red in path 2. Now they place a colour-dependent mirror in path 1, sending the red photon and the yellow photon from path 1 in different directions. They place their micro-scale picture of a cat in the path of the red photon from path 1. So intuitively we now have three paths: a Path 1 - red photon, which has a picture of a cat in its way, a Path 1 - yellow photon which has no obstacles and a Path 2 with no obstacles. Our original photon is still in a superposition of being in both sections of Path 1 and being in Path 2.

Now they recombine all three paths, in such a way that they make sure that Path 1 and Path 2 interfere destructively at the surface of the camera, which registers yellow photons (and only yellow photons). So no clicks at all, you'd say. But this is only the case if our red photon in Path 1 doesn't hit the cat-shaped object, in which case the blob of the wavefunction that went through path 1 is identical to the blob that went through path 2, so they can interfere. If the red photon did hit the micro-cat then the blob of amplitude that went through Path 1 no longer describes a red and a yellow photon, but only a yellow photon and a faintly vibrating image of a cat! [1] This blob can no longer interfere destructively with the blob that went through Path 2 (since they are now completely different when viewed in configuration space), so in particular the amplitudes for the yellow photon no longer cancel out. So now the camera gives a click.

And the best part is that from this down-conversion the direction of the red photon that was speeding towards that cat and the direction of that yellow photon in Path 1 are perfectly (anti-)correlated (the two photons are entangled), so when the red photon hits the cat just a little bit lower (which is just the same as saying that a little bit lower there is still some cat left, provided you try the experiment sufficiently many times) the yellow photon in Path 1 is going upwards a bit more (as the red one went downward) and your camera registers a click just a little bit higher on its surface of photoreceptors (or, more accurately - if red photons in Path 1 that go down hit the picture of the cat, then yellow photons that go up don't interfere with Path 2, so after recombination there is some uncanceled amplitude of a Path 2 yellow photon going upwards, which registers on your camera as a click high on the vertical axis). Same for the horizontal direction. So with this experiment you'd get a picture of your cat rotated by 180 degrees, as you only register a click when the amplitudes of the two paths no longer interfere, i.e. something has happened to your red photon in Path 1.

The arxiv paper has two enlightening overviews of their actual setup on pages 4 and 6, especially the one on page 4 is insightful. The one on page 6 just includes more equipment needed to make the idea actually work (for example these down-converters are not perfect, so all your paths are filled to the brim with green light, which you need to filter out, etc. etc.).

-

1) There is a very important but subtle step here - since the yellow and red photon from path 1 are created through down-conversion, they are perfectly entangled, and therefore as soon as we ensure that certain states of one of these two photons undergo interactions while others do not the resulting wavefunction can never be written as a product of a state for the red photon and the yellow photon. This is needed to ensure that at the recombining not only does the overall amplitude of Path 1 not interfere with the overall amplitude of Path 2, but that furthermore the wavefunction of the red photon in Path 1 cannot interfere with the wavefunction of the red photon in Path 2. [2]

2) The footnote above is a rather horrible explanation - a better explanation involves doing the mathematics. But the important bit to take home is that if you use anything less than a downconversion process to double up your photons even though you mucked up the red photons from Path 1 the yellow amplitudes from Path 1 and 2 might still happily interfere, which you do not want. It is vital that by poking at the red photon in Path 1 the total blob of path 1 completely ignores the total blob of path 2, even without touching the yellow photon in Path 1.
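A drastically simplified toy model of the setup described above (ignoring the real beam geometry, the down-conversion details, and the footnoted entanglement subtleties) shows the core mechanism: amplitudes for indistinguishable histories are added before squaring, while histories the cat can distinguish have their probabilities added after squaring. The specific numbers are illustrative only.

```python
# Amplitude for the yellow photon to reach the camera via each path.
# Two 50/50 beamsplitters contribute a factor 1/sqrt(2) each, and the
# recombination is arranged so the two paths carry opposite signs
# (destructive interference at the camera).
amp_path1 = +0.5   # (1/sqrt(2)) * (1/sqrt(2))
amp_path2 = -0.5

def click_probability(red_photon_hit_cat: bool) -> float:
    if not red_photon_hit_cat:
        # Histories indistinguishable: add amplitudes, then square.
        return abs(amp_path1 + amp_path2) ** 2
    # The cat carries a record of which path was taken, so the histories
    # are distinguishable: add probabilities, not amplitudes.
    return abs(amp_path1) ** 2 + abs(amp_path2) ** 2

print(click_probability(False))  # paths cancel, camera stays dark
print(click_probability(True))   # cancellation broken, camera clicks
```

When the red photon misses the cat the two blobs of amplitude cancel and the camera never fires; when it hits, the cancellation is destroyed and the yellow photon registers, which is how a camera that never "saw" a red photon builds up an image of the cat.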

Comment author: gwern 11 January 2015 05:31:17PM *  16 points [-]

Much of the reduction might be for non-obvious reasons, like whatever happened around 1980.

Exactly. Much of the reduction is for the same reasons you no longer hear much about cults, or about hijacking planes to Cuba (or about anarchists car-bombing Wall Street or trying to shoot the Queen, for that matter), or about communism. There seems to be something about the shift from a traditional partially-industrialized society to a post-industrial one which triggers this sort of great social upheaval, which manifests in part as new religious movements (labeled cults) and violent action (terrorism or guerrilla warfare), which eventually get discredited and cease to be alternatives. In Japan, you had the 'rush hour of the gods' with many syncretic Buddhist groups and the Red Army (to name the most infamous one) with a last gasp in Aum Shinrikyo; in America, you had those but also the Weathermen etc; in South Korea with its later development the process is still ongoing, with the cults taking on a Protestant Christian form - the recently deceased cult leader associated with the Sewol Ferry disaster being an interesting example - and the violence tending to be associated with North Korea (various assassinations or attempts come to mind).

Comment author: Lumifer 06 January 2015 05:50:45PM 16 points [-]

A general piece of advice: spend (relatively) more money on things you interact with the most, time-wise (as well as intensity-wise).

For example, if you spend a noticeable chunk of your working day with a coffee mug by your side, see if you can find a better mug (e.g. a double-wall one). Don't settle for a crap computer mouse, find and buy one which works well with your hand and mousing habits (and get a better keyboard as well, while you are at it). Etc., etc.

Comment author: alienist 06 January 2015 03:11:19AM *  11 points [-]

Also in certain circles it may be mandatory to show support for certain movements, e.g., if you were living in the Holy Roman Empire in the 17th century it was mandatory to show support for the ruler's religion, if you were a professor at a university in the Soviet Union it was mandatory to show support for communism.

Application of this to Scott Aaronson's statement is left as an exercise to the reader.

Comment author: shminux 16 December 2014 07:26:03AM 16 points [-]

Eliezer's writing, fiction and non-fiction, tends to attract hostility, and all LWers are automatically labeled "Yudkowskians". On a somewhat related note, the idea of AGI x-risk he's been pushing for years has finally gone mainstream, yet the high-profile people who speak out about it avoid mentioning him, like he is low-status or something.

Comment author: SilentCal 12 December 2014 11:05:03PM 12 points [-]

Also, should we be doing a better job publicizing the fact that LW's political surveys turn up plurality liberal, and about as many socialists as libertarians? Not that there's anything wrong with being libertarian, but I'm uneasy having the site classified that way.

Comment author: Grothor 10 December 2014 05:31:19AM 16 points [-]

It seems like we suck at using scales "from one to ten". Video game reviews nearly always give a 7-10 rating. Competitions with scores from judges seem to always give numbers between eight and ten, unless you crash or fall, and get a five or six. If I tell someone my mood is a 5/10, they seem to think I'm having a bad day. That is, we seem to compress things into the last few numbers of the scale. Does anybody know why this happens? Possible explanations that come to mind include:

  • People are scoring with reference to the high end, where "nothing is wrong", and they do not want to label things as more than two or three points worse than perfect

  • People are thinking in terms of grades, where 75% is a C. People think most things are not worse than a C grade (or maybe this is just another example of the pattern I'm seeing)

  • I'm succumbing to confirmation bias and this isn't a real pattern

Comment author: Toggle 09 December 2014 06:37:29PM *  16 points [-]

Reminds me of something Scott said once:

And when I tried to analyze my certainty that – even despite the whole multiple intelligences thing – I couldn’t possibly be as good as them, it boiled down to something like this: they were talented at hard things, but I was only talented at easy things.

It took me about ten years to figure out the flaw in this argument, by the way.

Comment author: Bugmaster 09 December 2014 08:25:53AM *  8 points [-]

I downvoted this post not because I hate you, or because I love Eugine_Nier (o), but because I'd like to see fewer post like this one in the future. And I think that expressing my sentiment is what the "Downvote" button is for.

More specifically, I don't think that public shaming and witch hunts belong on Less Wrong, even when the person being hunted is actually a witch (oo). I think that the toxic culture such tactics create is likely to be more harmful than individual unruly posters, in the long term.

(o) I don't even remember who he is, though the name does sound familiar.
(oo) Metaphorically speaking.

Comment author: Falacer 09 December 2014 02:05:38AM 16 points [-]

We could really use a new Aral Sea, but intuitively I'd have expected that this would be a tiny dent in the depth of the oceans. So, to the maths:

Wikipedia claims that from 1960 to 1998 the volume of the Aral sea dropped from its 1960 amount of 1,100 km^3 by 80%.

I'm going to give that another 5% for more loss since then, as the South Aral Sea has now lost its eastern half entirely.

This gives ~1100 * .85 = 935km^3 of water that we're looking to replace.

The Earth is ~500m km^2 in surface area, approx. 70% of which is water = 350m km^2 in water.

935 km^3 over an area of 350m km^2 comes to a depth of 2.6 mm.

This is massively larger than I would have predicted, and it gets better. The current salinity of the Aral Sea is 100g/l, which is way higher than that of seawater at 35g/l, so we could pretty much pump the water straight in with net environmental gain. In fact this is a solution to the crisis that has been previously proposed, although it looks like most people would rather dilute the seawater first.

To achieve the desired result of a 1 inch drop in sea level, we only need to find 9 equivalent projects around the world. Sadly, the only other one I know of is Lake Chad, which is significantly smaller than the Aral Sea. However, since the loss of the Aral Sea is due to over-intensive use of the water for farming, this gives us an idea of how much water can be contained on land in plants: I would expect that we might be able to get this amount again if we undertook a desalination/irrigation program in the Sahara.
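The arithmetic above checks out; here it is as a quick script, using the same round figures as the comment (1,100 km^3 original volume, 85% lost, ~500m km^2 Earth surface, 70% ocean).

```python
INCH_MM = 25.4

aral_volume_km3 = 1_100 * 0.85          # 85% of the 1960 volume lost -> 935 km^3
ocean_area_km2 = 500e6 * 0.70           # ~500m km^2 of surface, 70% of it water

depth_km = aral_volume_km3 / ocean_area_km2
depth_mm = depth_km * 1e6               # 1 km = 1e6 mm

projects_for_one_inch = INCH_MM / depth_mm

print(f"sea level drop per Aral refill: {depth_mm:.2f} mm")
print(f"Aral-sized projects per inch:   {projects_for_one_inch:.1f}")
```

This reproduces the ~2.7 mm per refill and roughly nine to ten Aral-sized projects per inch of sea level.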

Comment author: sixes_and_sevens 08 December 2014 01:18:09AM 15 points [-]

Last week I was going to ask if anyone had recommendations for nerd-friendly resources on public relations. Then I remembered where I was, and went "ha ha ha!"

This was possibly unfair.

Out of the many people who read Less Wrong, it feels like one or two of them should be able to recommend a good entry point for any given subject. We've got physics, mathematics, statistics, computer science, et al. covered, but other areas don't enjoy the same coverage.

It does seem to me that if there were someone on Less Wrong with a background in PR (or constitutional law, or the Yugoslav conflict, or whatever), they'd probably have some ideas about what material other Less Wrongers might find accessible or valuable. With that in mind, does anyone want to volunteer an unusual-for-LW academic or professional background we can mine for information?

Comment author: shminux 04 December 2014 04:17:45PM *  15 points [-]

First, upvoted for the courage to seek help.

You might not want to hear that, but your story is so typical, you might as well have your picture under the dictionary entry "children of Narcissistic parents". I have heard identical stories a dozen times. If anyone was still in doubt "there was no me, there was just her and a faulty copy of her" gives it away. Also that your sister was the golden child, and you were the scapegoat.

Now to the really painful part. It is not clear to me if you are ready to admit that your parents are toxic and you should go full no-contact. They are beyond redemption. You owe them nothing. You have no mother, and never had one. The woman caretaker whose genetic material you share does not love you as a mother would, and never did (and probably never could).

This is a classic self-help book on the subject which helped several people I know: http://www.willieverbegoodenough.com/ There are many other good ones, too, check the Amazon recommendations under http://www.amazon.com/Will-Ever-Good-Enough-Narcissistic-ebook/dp/B001AO0GD6 .

There are also multiple online support groups for people like you, many of whom end up unwittingly perpetuating their parents' behavior, suffer even more as a result, and subject their partners and children to the only behavioral pattern they know.

Here is the questionnaire from the book: http://www.willieverbegoodenough.com/narcissistic-mother/ . You will probably answer Yes to 80% of the questions or more.

Good luck to you, you have a long way to travel to emotional sanity, hope you make it!

Comment author: James_Miller 01 December 2014 08:18:12PM 15 points [-]

From my book Singularity Rising:

Communist dictator Joseph Stalin maintained power through killing millions of his countrymen and terrorizing the rest. He often lashed out at his old comrades, sometimes killing them and their families; other times he was satisfied with just jailing their wives. Stalin, who was denounced shortly after his death by his successor Nikita Khrushchev, must have known how hated he was. But the dictator knew that those who hated him were too weak or fearful to hurt him.

The first cryonics patient was preserved in 1967, fourteen years after Stalin’s death. But what if, I wonder, cryonics existed during the time of Stalin, and the dictator hoped to have himself preserved? Stalin was too smart to think his successors would have ever wanted him back. So if, at the beginning of his rule, Stalin had hoped to someday use cryonics, he would have had to be a less ruthless ruler. To have any hope at cryogenic revival, the world will need to want you back. So if the world’s leaders intend to use cryonics, they will have to care more about what the future will think of them.

Comment author: [deleted] 01 December 2014 04:38:03AM 16 points [-]

I've applied to graduate from my Ph.D. program, and I've applied to several post-docs. I also secured an interview at a lab.
