Ever had a minor insight, but didn't feel confident or interested enough to write a top-quality LW post? Personally, the high standards of LW and my own uncertainty have kept me from ever bothering to present a number of ideas.

Here, you can post quick little (preferably under 100 words) insights, with the explicit understanding that the idea is unpolished and unvetted, and hey, maybe nobody will be interested, but it's worth a shot.

Voters and commenters are invited to be especially charitable: we already know these ideas are less-than-perfect in concept, presentation, or ______, and we want people to be able to present their ideas quickly so that they can be understood, rather than having to craft them to be defensible.

If a crucial flaw is found in an insight, or if the idea is unoriginal, please point it out.

Alternatively, if enough interest is shown, the idea can be expanded into a Discussion post. You are invited to include in that post a disclaimer that, even if the idea is not of interest to everybody, some people have shown interest over in the Unpolished Insights Thread, and the post is targeted towards interested people like them.

Hopefully, this format gets us the best of both worlds: easily filtering out unneeded content without anybody feeling punished, and drawing the best ideas out into the light without scaring anybody off with high standards.

 

(Post your ideas separately, so that they can be voted on and commented on separately.)


I had an idea for increasing average people's rationality; it has four qualities:

  • It doesn't seem/feel "rationalist" or "nerdy."
  • It can work without people understanding why it works.
  • It can be taught without understanding its purpose.
  • It can be perceived as about politeness.

The idea: a high school class where people try to pass Intellectual Turing Tests. The norm it teaches is that it's not POLITE/sophisticated to assert opinions if you can't show that you understand the people you're saying are wrong.

We already have a lot of error-detection ability when our minds criticize others' ideas; we just need to access that power for our own ideas.

[anonymous]

[META] General +1 for taking initiative and giving us something to iterate on, wrt the idea of sourcing insights in short form.

I'm not sure to what extent you want people to criticize ideas in this thread, and I'm going to test the waters. Give me feedback on how well this matches the norms you envision.

An immediate flaw comes to mind, one that any elaboration of this idea should respond to: changing the high school curriculum is very difficult. If you've acquired the social capital to change a high school's curriculum, you shouldn't spend it on such a small, marginal contribution; you could probably find something with a larger effect for the same social capital.

This is an interesting idea, although I'm not sure what you mean by

It can work without people understanding why it works

Shouldn't the people learning it understand it? It doesn't really seem much like learning otherwise.

You don't have to understand what it does or why it works (or care about those) to successfully perform it. You can put yourself in the other side's shoes without understanding the effects of doing so.

This is about 100 words, in case you want to get a feel for the length.

On the Value of Pretending

Actors don't break down the individual muscle movements that go into expression; musicians don't break down the physical properties of the notes or series of notes that produce expression.

They both simulate feeling to express it. They pretend to feel it. If we want to harness confidence, amiability, and energy, maybe there's some value in pretending and simulating (what would a "nice person" do?).

Cognitive Behavioral Therapy teaches that our self-talk strongly affects us, counseling us not to say things like "Oh, I suck." Positive self-talk ("I can do this") may be worth practicing.

I'm not sure why, but this feels not irrational, yet highly not-"rational" (against the culture associated with "rationality"). This also intrigues me...

In this vein, I have had some good results from the simple expedient of internally saying "I want to do this" instead of "I have to do this" with regard to things that system 2 wants to do (when system 1 feels reluctant), i.e. akratic things. I have heard this reframing suggested before, but I feel like I get benefit from actually thinking the "I want" verbally.

The leader of North Korea apparently used a VX nerve agent to kill his half-brother in a Malaysian airport, thus loudly signaling that he is an unhinged sociopath with WMDs. I think a first strike attack on North Korea might be justified.

I read somewhere that NK is collapsing, according to a top-level defector. Maybe it's best to wait things out.

an unhinged sociopath with WMDs

It's not like this is news.

I think a first strike attack on North Korea might be justified.

You go ahead, I'll hold your beer.

Elo

Humbly submitting my last stub list, which was received relatively poorly.

http://lesswrong.com/lw/nwi/a_collection_of_stubs/

Low-quality thought-vomiting, eh?

I'll try to keep it civil. I get the feeling the site has been pulled away from its founding goals and members as a way to striate its current readership: either pay into a training seminar through one of the institutions advertised above, or be left behind to bicker over minutiae in an underinformed fashion. That said, nobody can doubt the usefulness of personal study, though it is slow and unguided.

I'm suspicious of the current motives here, and of the atmosphere this site provides. I guess it can't be helped, since MIRI and CFAR are at the mercy of needing revenue just like any other institution. So where does one draw the line between helpful guidance and malevolent exploitation?

Can you please clarify whose motives you're talking about, and generally be a lot more specific with your criticisms? Websites don't have motives. CFAR and MIRI don't run this website, although of course they have influence. (In point of fact, I think it would be more realistic to say nobody runs this website, in the sense that it is largely in 'maintenance mode' and administrator changes/interventions tend to be very minimal and occasional.)

[anonymous]

I think that what you say is true, although I'm unsure that the dichotomy you provide is correct.

Personally, I see great value in a Schelling point that tries to advance rationality. I don't think the current LW structure is optimal, and I also agree that there's not enough structure to help people who are learning ease into these ideas, or to provide avenues of exploration.

I also don't think that CFAR/MIRI have been heavily using LW as a place for advertisement, outside of their fundraising goals, but I also haven't been here long enough to really say. Feel free to correct me with more evidence.

Towards the end of improving materials on rationality, I've been thinking about what a collective attempt to provide a more practical sequel to the Sequences might look like. CFAR's curriculum feels like it still captures only a small swath of rationality space. I'm thinking of something like a more systematic long-form attempt to teach skills, where we could source quick feedback from people on this site.

A charity is a business whose products are feeling good about yourself and the admiration of others.

To make a lucrative product, don't ask "what needs need filling," ask "what would help people signal more effectively."

Your claim seems to factor into two parts: "There exist charities that are just selling signaling", and "All charities are that kind of charity." The first part seems obviously true; the second seems equally obviously false.

Some things that I would expect from a charity that was just selling signaling:

  • Trademarking or branding. It would need to make it easy for people to identify (and praise) its donors/customers, and resist imitators. (Example: the Komen breast-cancer folks, who have threatened lawsuits over other charities' use of the color pink and the word "cure".)
  • Association with generic "admiration" traits, such as celebrity, athleticism, or attractiveness. (Example: the Komen breast-cancer folks again.)
  • Absence of "weird" or costly traits that would correlate with honest interest in its area of concern. (For instance, a pure-signaling charity that was ostensibly about blindness might not bother to have a web site that was highly accessible to blind users.)
  • In extreme cases, we would be hearing from ostensible beneficiaries of the charity telling us that it actually hurts, excludes, or frightens them. (Example: Autism Speaks.)
  • Jealousy or competitiveness. It would try to exclude other charities from its area of concern. (A low-signaling charity doesn't care if it is responsible for fixing the thing; it just wants the thing fixed.)

Regarding instrumental rationality: I've been wondering for a while now if "world domination" (or "world optimization", as HJPEV prefers) is feasible. I haven't entirely figured out my values yet, but whatever they turn out to be, WD/WO sure would be handy for achieving them. But even if WD/WO is a ridiculously far-fetched dream, it would still be a very good idea to know one's approximate chances of success with various possible paths to achieving one's values. I have therefore come up with the "feasibility problem." Basically, a solution to the problem consists of an estimation of how much one can actually hope to influence the world, and to what extent one can actually fulfill one's values. I think it would be very wise to solve the feasibility problem before attempting to take over the world, or become the President, or lead a social revolution, or improve the rationality of the general populace, etc.

Solving the FP would seem to require a deep understanding of how the world operates (anthropomorphically speaking, if you get my drift; I'm talking about the hoomun world, not physics and chemistry).

I've even constructed a GPOATCBUBAAAA (general plan of action that can be used by any and all agents): first, define your utility function, and also learn how the world works (easier said than done). Once you've completed that, you can apply your knowledge to solve the FP, and then you can construct a plan to fulfill your utility function, and then put it into action.
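Read as a procedure, the GPOATCBUBAAAA is essentially a pipeline of (very hard) subproblems. A minimal sketch in Python, purely illustrative, where every function is a hypothetical placeholder standing in for one of those subproblems, might look like this:

    # Illustrative sketch only: each function is a hypothetical placeholder,
    # not a claim that the corresponding step is tractable.

    def define_utility_function():
        """Step 1: figure out what you actually value (placeholder)."""
        return lambda world_state: 0.0  # stand-in utility: scores a world state

    def learn_world_model():
        """Step 2: learn how the (human) world works (placeholder)."""
        return {"institutions": [], "incentives": []}  # stand-in world model

    def solve_feasibility_problem(utility, world_model):
        """Step 3: estimate how much you can influence the world and how far
        your values can actually be fulfilled (placeholder)."""
        return {"influence": 0.0, "expected_fulfillment": 0.0}

    def construct_plan(utility, world_model, feasibility):
        """Step 4: choose a path whose expected payoff, given the feasibility
        estimate, justifies its cost (placeholder)."""
        return ["pick a path", "act on it"]

    def execute(plan):
        """Step 5: put the plan into action."""
        for step in plan:
            print("doing:", step)

    if __name__ == "__main__":
        u = define_utility_function()
        m = learn_world_model()
        f = solve_feasibility_problem(u, m)
        execute(construct_plan(u, m, f))

The point of the sketch is only the ordering: the feasibility problem sits between "know your values and the world" and "make a plan," which is why solving it first matters.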

This is probably a bit longer than 100 words, but I'm posting it here and not in the open thread because I have no idea if it's of any value whatsoever.

Am I reading this right as, basically, crack the alignment problem manually, and then finish science (then proceed to take over the world)?

[-]Elo00

Can you do me a favour and separate this into paragraphs (or fix the formatting)?

Thanks.

The LessWrong Slack has a channel called #world_domination.

Fixed the formatting.

While this idea may not be of interest to everybody, it has already been vetted by the Open Thread.

So, basically a brainstorming thread?

sdr

FAI value alignment research and cryonics are mutually inconsistent stances. Cryo resurrection will almost certainly happen by scanning and whole-brain emulation. An EM/upload with a subjective timeline sped up to 1000x will be indistinguishable from a UFAI. Incremental value alignment results of today will be applied to your EM tomorrow.

For example: how would you feel if, with all your brilliant intellect and all your inner motivational spark, you were looped into a rat race against 10,000 copies of yourself, performing work for and grounded to a baseline, where if you don't win against your own selves, all your current thoughts, feelings, and emotions are permanently destroyed?

The not-"rational" (read "not central to the rationalist concept cluster in the mind/not part of the culture of rationalists"), but rational things we need to do.

The value of pretending and of self-talk I mention in another comment. The value of being nice is another thing not strongly associated with "rationalism," but which is, I think, rational to recognize.

There are others. Certain kinds of communication, for instance. Why can't any "rationalists" talk? The best ones are so wrapped up in things that betray their nerd-culture association that they appeal only to other nerds; you can practically identify people who aren't "rationalists" by checking whether they sound nerdy. There's probably a place for sounding a lot more like Steve Harvey, or a pastor, or a politician, if there's any place for effectively communicating with people who aren't nerds.

There are other anti-rationalist-culture things we should probably look for and develop.