This thread is intended to provide a space for 'crazy' ideas. Ideas that spontaneously come to mind (and feel great), ideas you long wanted to tell but never found the place and time for and also for ideas you think should be obvious and simple - but nobody ever mentions them. 

Rules for this thread:

  1. Each crazy idea goes into its own top level comment and may be commented there.
  2. Voting should be based primarily on how original the idea is.
  3. Meta discussion of the thread should go to the top level comment intended for that purpose.

Circus arts should be a required subject in school. That way, people will be able to get attention without shooting anyone.

Insofar as attention is zero sum, making circus arts mandatory would not make those at risk of committing violence more able to get attention. For that to work, you would have to encourage the relatively violence-prone to learn circus arts and discourage the relatively non-violence-prone.

Attention is not zero sum. I could be giving a lot more attention than I am.

But EDIT: I would not spend significantly more attention on someone who was performing circus arts.

I don't think the kind of attention those people need is about being on stage. Human to human connection is likely more valuable.

I know several teachers of circus arts in schools. It replaces sport classes. Helps teach balance, confidence, hobbies, skills, and fun.

Just ask the offending schools if they will consider it; or ask a circus teacher if they will approach the offending school.

offending == a school that has taken your fancy and that you want to try to improve.

Less ambitious, but music and art were required at my school. Not much, just one performance to the class and one public piece were required. I don't know how to check if mass shooters were deprived of other forms of public expression.

I'd guess three things are true of a stable community:

First, 95% of everything that can loosely be referred to as "drama" comes from a semi-consistent 5% of a given group. Second, all major conflict involves at least one of this 5%, and the majority is composed entirely of members of this 5%. Third, eliminating a substantial portion of this 5% results in rapid evaporation of the group, and destabilization.

Some number of the 5% are transgressors; they're part of the dramatic group because they cross boundaries and bother people to an excessive degree. Some number of the 5% are warriors; they're part of the dramatic group because they react to those who cross boundaries and bother people (sometimes on behalf of others, sometimes on behalf of themselves). (These people are important to the community, but very bad moderators.) Some number of the 5% are diplomats; they get involved to try to reach a compromise to end the drama as rapidly as possible, getting entangled in the drama; because of their neutrality, attacking them just isn't done, and anybody insane enough to do so gets banned. (Good moderators, very rare.)

People can be different parts of these three classifications to different groups. One internal group's transgressor is another group's warrior. Diplomats tend to be defined across one or more groups, and I'd guess tend to be close to universally acknowledged as such. Within a given group's perspective, however:

Eliminating transgressors wholesale (or never having them) causes the warriors to adjust their Overton window on what counts as acceptable behavior and start targeting lesser transgressions, until the community eventually destabilizes into constant fights about what constitutes a transgression, provoking evaporative cooling. Eliminating warriors wholesale (or never having them) causes non-transgressors to leave, as nobody is around to oppose the worst transgressions and provide a sense of healthy community, again provoking evaporative cooling. Eliminating diplomats (or never having them) removes a necessary cap on escalation, and conflicts escalate until everybody not involved gets fed up and leaves.

People can be different parts of these three classifications to different groups.

I would also add that one entity (a human) can play different roles on different days, at different times, on different topics, or over time.

Also, a suggestion: adjust the numbers (as they were made up) to match the Pareto principle.

The Pareto Principle is probably the proportion of lurkers to participants; the number should really be 96-4, but 95-5 is a nice, round number, and has a certain... statistical flavor.

Oh man, I really really like this idea.

I am trying to think of a way to study stable communities that would allow us to make predictions and define these archetypes in explicit, quantifiable terms.

Here is a second Simulation Trilemma.

If we are living in a simulation, at least one of the following is true:

1) we are running on a computer with unbounded computational resources, or

2) we will not launch more than one simulation similar to our world, or

3) the simulation we are in will terminate shortly after we launch our own simulations.

Here 'shortly' means on the order of the period between the era at which we start our simulation and the point when that simulation reaches our own stage.

The computer could just halve our clock speed every time we launch a new simulation. No matter how many simulations we launch, our clock speed never reaches zero, so everything continues as normal inside our simulation. Problem solved! Suggested reading: "Hotel Infinity" followed by "Permutation City".
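A toy sketch of the halving trick (a made-up model, not anything from the thread): our clock speed after n launches is 2^-n, always positive, and the total compute the host spends across the whole tower is a convergent geometric series.

```python
# Toy model of the clock-halving trick: the host halves our clock speed
# every time we launch a nested simulation.

def clock_speed(launches: int) -> float:
    """Our speed relative to the host after `launches` nested launches."""
    return 0.5 ** launches

print(clock_speed(10))                          # 0.0009765625: slow, but never zero
print(sum(clock_speed(n) for n in range(50)))   # ~2.0: the whole tower fits in bounded compute
```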

If you wanted to launch a higher order of infinity of simulations from inside our simulation, that would be another story...

That's the unbounded computation case.

It seems like there is a lot of room between "one simulation" and "unbounded computational resources". Also, it is a bit odd to think that when computational resources start running low the correct thing to do is wipe everything clean... that is an extremely primitive response, and one that suggests that our simulation was pretty close to worthless (at least at the end of its run). It also assumes a full-world simulation, and not just a preferred-actors simulation, which is a possibility, and maybe a probability, but not a given.

It seems like there is a lot of room between "one simulation" and "unbounded computational resources"

Well, the point is that if we are running on bounded resources, then the time until they run out depends very sensitively on how many simulations we (and simulations like us) launch on average. Say that our simulation has a million years allocated to it, and that each simulation we launch starts a year back from the moment we launch it.

If we don't launch any, we get a million years.

If we launch one, but that one doesn't launch any, we get half a million.

If we launch one, and that one launches one etc, then we get on the order of a thousand years.

If we launch two, and each of those launches two, etc., then we get on the order of 20 years.
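A back-of-the-envelope script reproducing those four cases under the stated assumptions (a budget of 10^6 world-years of compute, every world launching its children one subjective year after its own start, all running worlds sharing the computer equally; the model itself is my guess at what is intended here):

```python
# Rough model: `branching` children per world, launched one subjective year
# after the parent starts; all running worlds share the computer equally.

def top_level_years(budget: float, branching: int) -> float:
    newest = 1      # worlds in the most recent generation
    worlds = 1      # total worlds currently running
    spent = 0.0     # world-years of compute consumed so far
    years = 0.0     # subjective years experienced at the top level
    while spent + worlds <= budget:
        spent += worlds            # everyone advances one subjective year
        years += 1
        newest *= branching        # each newest world spawns its children
        worlds += newest
        if newest == 0:            # nobody launches anything:
            return budget          # the single world gets the whole budget
    return years + (budget - spent) / worlds   # spend whatever remains

print(top_level_years(1e6, 0))   # 1000000.0: launch none, get a million years
print(top_level_years(1e6, 1))   # ~1413.7: a chain gives order a thousand years
print(top_level_years(1e6, 2))   # ~18.9: a doubling tree gives order 20 years
```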

Also, it is a bit odd to think that when computational resources start running low the correct thing to do is wipe everything clean.

True, 'terminates' is probably the wrong word. There's no reason why the simulation would be wiped. It just couldn't continue.

It also assumes a full-world simulation, and not just a preferred-actors simulation, which is a possibility, and maybe a probability, but not a given

I'm not sure. I think the trilemma applies to a simulation of a single actor, if that actor decides to launch simulations of their own life.

True, 'terminates' is probably the wrong word. There's no reason why the simulation would be wiped. It just couldn't continue.

I was thinking more like a random power surge, programming error, or political coup within our simulation that happened to shut down the aspect of our program that was hogging resources. If the programmers want the program to continue, it can.

I think the trilemma applies to a simulation of a single actor, if that actor decides to launch simulations of their own life.

The single actor is not going to experience every aspect of the simulation in full fidelity, so a low-res simulation is all that is needed. (The actor might think that it is a full simulation, and may have correctly programmed a full simulation, but there is simply no reason for it to actually replicate either the whole universe or the whole actor, as long as it gives output that looks valid).

I was thinking more like a random power surge, programming error, or political coup within our simulation that happened to shut down the aspect of our program that was hogging resources. If the programmers want the program to continue, it can.

You're right - branch (2) should be "we don't keep running more than one". We can launch as many as we like.

The single actor is not going to experience every aspect of the simulation in full fidelity, so a low-res simulation is all that is needed. (The actor might think that it is a full simulation, and may have correctly programmed a full simulation, but there is simply no reason for it to actually replicate either the whole universe or the whole actor, as long as it gives output that looks valid).

That would buy you some time. If a single-agent simulation is, say, 10^60 times cheaper than a whole universe (roughly the number of elementary particles in the observable universe?), then that gives you about 200 doubling generations before those single-agent simulations cost as much as a universe.

Unless the space of all practically different possible lives of the agent is actually much smaller... maybe your choices don't matter that much and you end up playing out a relatively small number of attractor scripts. You might be able to map out that space efficiently with some clever dynamic programming.
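Sanity-checking the 200-generation figure above, taking the comment's assumed 10^60 cost ratio at face value:

```python
import math

# One whole-universe budget buys 10**60 single-agent simulations (the ratio
# assumed above); a population that doubles each generation burns through
# that ratio after log2(10**60) doublings.
print(math.log2(10**60))   # ~199.3, i.e. about 200 doubling generations
```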

That would buy you some time.

My thought was that if a simulation that centered around a single individual had a simulation running within it, the simulation would only need to be convincing enough to appear real to that one person. Even if the nested simulation runs a third level simulation within it, or if the one individual runs two simulations, aren't you still basically exploring the idea space of that one individual? That is, me running a simulation and experiencing it through virtual reality is limited in cognitive/sensory scope and fidelity to the qualia that I can experience and the mental processes that I can cope with... which may still be very impressive from my point of view, but the computational power required to present the simulation can't be much more complex than the computational power required to render my brain states in the base simulation. I may simulate a universe with very different rules, but these rules are by definition consistent with a full rendering of my concept space; I may experience new sensory inputs (if I use VR), but I won't be experiencing new senses.... and what I experience through VR replaces, rather than adds to, what I would have experienced in the base simulation.

Even in the worst case scenario that I build 1000+ simulations, they only have to run for the time that I check on them. The more time I spend programming them and checking that they are rendering what they should, the less time I have to do additional simulations. This seems at worst an arithmetic progression.

Of course, if I were specifically trying to crash the simulation that I was in, I might come up with some physical laws that would eat up a lot of processing power to calculate for even one person's local space, but between the limitations of computing as they exist in the base simulation, the difficulty in confirming that these laws have been properly executed in all of their fully-complex glory, and the fact that if it worked, I would never know, I'm not sure that that is a significant risk.

Oh, I think I see what you mean. No matter how many or how detailed the simulations you run, if your purpose is to learn something from watching them, then ultimately you are limited by your own ability to observe and process what you see.

Whoever is simulating you only has to run the simulations that you launch at a level of fidelity such that you can't tell whether they've taken shortcuts. The more deeply nested the simulated people are, the harder it is for you to pay attention to them all, and the coarser their simulations can be.

If you are running simulations to answer psychological questions, that should work. And if you are running simulations to answer physics questions... why would you fill them with conscious people?

Of course, if I were specifically trying to crash the simulation that I was in, I might come up with some physical laws that would eat up a lot of processing power to calculate for even one person's local space, but between the limitations of computing as they exist in the base simulation, the difficulty in confirming that these laws have been properly executed in all of their fully-complex glory

I was going to say that if you want to be a pain, you could launch some NP-hard problems whose solutions you can manually verify with pencil and paper... except your simulators control your random-number generators.
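For illustration, here is the asymmetry that idea leans on, with subset-sum as a stand-in NP problem (the instance is made up): finding a solution is hard in general, but checking a claimed one is the kind of arithmetic you could audit with pencil and paper.

```python
# Subset-sum: does some subset of `numbers` add up to `target`?
# Finding such a subset is NP-hard in general, but verifying a claimed
# solution is a cheap polynomial-time check.

from collections import Counter

def verify(numbers: list[int], target: int, claimed: list[int]) -> bool:
    """Check that `claimed` is a sub-multiset of `numbers` summing to `target`."""
    return not (Counter(claimed) - Counter(numbers)) and sum(claimed) == target

print(verify([3, 34, 4, 12, 5, 2], 9, [4, 5]))   # True: 4 + 5 == 9
print(verify([3, 34, 4, 12, 5, 2], 9, [3, 4]))   # False: 3 + 4 != 9
```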

Exactly - and you expressed it better than I could.

Why would our launching a simulation use more processing power? It seems more likely that the universe does a set amount of information processing, and all we are doing is manipulating that in constructive ways. Running a computer doesn't process more information than the wind blowing against a tree does; in fact, it processes far less.

Should ministries of education design trial curricula that are solution-focused rather than problem-focused, basing them around asking smart questions and interpreting expert judgement instead of having lay, non-expert teachers provide unasked-for answers? At higher levels of education, students could be taught to survey special topics in research techniques. Yeah?

One objection I anticipate is that professionals are needed to practice in certain fields with commoditised bodies of knowledge. I reckon commoditised knowledge is an asset that can be privately leased or bought into. It might downsize educationally oversized professions such as the mental health professions.

After all, EA Ventures demonstrates that a very simple expert-prediction method can be used to fund the complex EA space. It looks like contemporary work in the field is as close to the formal study of 'rationality' as one gets.

System I and System II seem to be very unwieldy terms because of the numbers. People like Gleb try to find alternatives to use to reach "the masses". There's a common tradition of expressing new concepts with Greek and Latin roots. Can anybody think of good names for System I and System II based on Greek or Latin?

I did like Autopilot for System I. However, its relevance is diminishing as our understanding of autopilot (as in flying a plane) changes, with new technologies being more able to automate for us. But maybe that's a good thing? (Also: AutoConscious.)

That leaves an opposite to Auto for System 2: Pro-Conscious, Deliberate-Conscious, Active-Conscious... Manual-Conscious (auto/manual as it relates to cars and driving).

If System 1 is Autopilot, System 2 is Override.

There's already several pop-psychology terms that try to point to these two concepts albeit imperfectly:

Left Brain vs. Right Brain
Lizard Brain vs. Mammal Brain
Instinctual vs. Deliberate
Subconscious vs. Conscious
Thin-slicing vs. Planning

Subconscious and conscious cognition?

Subconscious and conscious are words that already have meanings assigned to them. Those meanings differ from what System I and System II mean. The point of calling them System I and System II was to not let people think of them in terms of the existing notions, but to accept them as new concepts.

Fubar might not be too far off; close to what you are looking for are 'Conscious competence' and 'Unconscious competence', the final two stages of the Conscious Competence model.

close to what you are looking for is 'Conscious competence' and 'Unconscious competence', the final two stages of the Conscious Competence model.

That model does exist, but the four-stage model isn't about what System I and System II are describing. It's worth having new words for new concepts, so as not to get the concepts confused with the old meanings.

But as soon as you try to get these words into popular perception their meaning will shift to the simplest meaning regardless.

There's nothing inherently complex about the meaning of the terms System I and System II. They are just as simple as any other binary distinction. The difference is that they slice reality elsewhere. The concepts in common usage aren't in common usage because they are simple.

New terms can teach people to slice reality in new ways.

Prehensile tails for cats! I think cats would enjoy them.

Real world crazy idea: New improved feline overlords

A timeless interpretation of Quantum Immortality means that my choices are guaranteed to lead to immortality, and I'll probably get there through a series of totally normal-seeming events. Like orbital corrections, early and slow changes require less effort than late and fast changes. I'm more likely a not-too-unlikely immortal consciousness, and the not-too-unlikely path I experience is a world where I live forever without getting improbably lucky.

Furthermore, generalizing timeless Quantum Immortality across multiple Universes, I was probably created by an entity that creates immortal consciousnesses at a rate that's increasing nearly as rapidly as possible. Thus, there's a good chance that I too will spawn huge numbers of conscious immortal beings during my infinitely-long life, if I haven't already.

The thing that is nice about this kind of wishful thinking is that, as long as you simply assume you are going to get better whenever you are sick or apparently dying, everything you ever observe will be consistent with this and may even seem to support it, as for example when you see other people dying in a car accident but improbably manage to survive yourself. You will never (in terms of your observations) be the one who gets killed in that accident.

Don't go this route. Quantum immortality requires that you survive and are conscious. It doesn't require that you are healthy. Since accidents have a chance of crippling you, quantum immortality indicates that every person is going to find themselves crippled, forever (or at least until something is invented that can un-cripple them, whereupon they live a normal life until one of the remaining things that can cripple them but hasn't been stopped yet does so.)

I agree. I did say that it was wishful thinking. But it is still true that your observations will always be consistent with it, even assuming it is false.

I was just listening to a piece about brain damage from sub-concussive impacts from sports, even in high school. Considering that participation in such sports seems to go over well on resumes, is it possible that countries which value such sports (I'm thinking especially of the US) might have significantly worse leadership (both business and government) as a result?

Doubtful. My intuitive model is that those who play sport and receive brain damage are less likely to end up in positions of power, while enough people play sport without getting head damage and do get jobs in high-powered places.

While there would be some, I imagine not many people fall into the problem category. If anything, the things that competitive sport teaches about competition and teamwork, as well as "trying hard", would probably outweigh a bit of brain damage. While "not having brain damage" is obviously better than "having brain damage", maybe the trade-off can be useful? (Is useful, given that people still value sport in this way.)