I'm not writing this to alarm anyone, but it would be irresponsible not to report on something this important. On current trends, every car will be crashed in front of my house within the next week. Here's the data:
Until today, only two cars had crashed in front of my house, several months apart, during the 15 months I have lived here. But a few hours ago it happened again, mere weeks from the previous crash. This graph may look harmless enough, but now consider the frequency of crashes this implies over time:
The car crash singularity will occur in the early morning hours of Monday, April 7. As crash frequency approaches infinity, every car will be involved. You might be thinking that the same car could be involved in multiple crashes. This is true! But the same car can only withstand a finite number of crashes before it is no longer able to move. It follows that every car will be involved in at least one crash. And who do you think will be driving your car?
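The extrapolation above can be sketched as a toy calculation: if the gaps between crashes shrink geometrically, the crash times converge to a finite "singularity" date. The numbers below are hypothetical placeholders, not the post's actual data.

```python
def singularity_offset(gap1: float, gap2: float) -> float:
    """Days from the latest crash until infinitely many crashes have
    occurred, assuming each gap is gap2/gap1 times the previous one."""
    r = gap2 / gap1  # geometric ratio between successive gaps
    if r >= 1:
        raise ValueError("gaps are not shrinking; no singularity")
    # remaining time = gap2*r + gap2*r**2 + ... = gap2 * r / (1 - r)
    return gap2 * r / (1 - r)

# e.g. 120 days between the first two crashes, then only 20 days to the third:
print(round(singularity_offset(120, 20), 1))  # 4.0 days until the singularity
```

The geometric series is why "mere weeks" after "several months" is enough to pin down a date: once the ratio is below one, the infinite sum of future gaps is finite.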
Quick! Someone fund my steel production startup before it's too late! My business model is to place a steel foundry under your house to collect the exponentially growing number of cars crashing into it!
Imagine how much money we can make by revolutionizing metal production during the car crash singularity! Think of the money! Think of the Money! Think of the Money!!!
In the debate over AI development, two movements stand as opposites: PauseAI calls for slowing down AI progress, and e/acc (effective accelerationism) calls for rapid advancement. But what if both sides are working against their own stated interests? What if the most rational strategy for each would be to adopt the other's tactics—if not their ultimate goals?
AI development speed ultimately comes down to policy decisions, which are themselves downstream of public opinion. No matter how compelling technical arguments might be on either side, widespread sentiment will determine what regulations are politically viable.
Public opinion is most powerfully mobilized against technologies following visible disasters. Consider nuclear power: despite being statistically safer than fossil fuels, its development has been stagnant for decades. Why? Not because of environmental activists, but because...
I think it's obvious that you should not pursue 3D chess without investing serious effort in making sure that you play 3D chess correctly. I think there is something to be said for ignoring the shiny clever ideas and playing simple virtue ethics.
But if a clever scheme is in fact better, and you have accounted for all of the problems inherent to clever schemery, of which there are very many, then... the burden of proof isn't literally insurmountable, you're just unlikely to end up surmounting it in practice.
(Unless it's 3D chess where the only thing you might end up wasting is your own time. That has a lower burden of proof. Though still probably don't waste all your time.)
Decision theory is about how to behave rationally under conditions of uncertainty, especially if this uncertainty involves being acausally blackmailed and/or gaslit by alien superintelligent basilisks.
Decision theory has found numerous practical applications, including proving the existence of God and generating endless LessWrong comments since the beginning of time.
However, despite the apparent simplicity of "just choose the best action", no comprehensive decision theory that resolves all decision theory dilemmas has yet been formalized. This paper at long last resolves this dilemma, by introducing a new decision theory: VDT.
Some common existing decision theories are:
Still laughing.
Thanks for admitting you had to prompt Claude out of being silly; lots of bot results neglect to mention that methodological step.
This will be my reference to all decision theory discussions henceforth
Have all of my 40-some strong upvotes!
Hey Everyone,
It is with a sense of... considerable cognitive dissonance that I am letting you all know about a significant development for the future trajectory of LessWrong. After extensive internal deliberation, projections of financial runways, and what I can only describe as a series of profoundly unexpected coordination challenges, the Lightcone Infrastructure team has agreed in principle to the acquisition of LessWrong by EA.
I assure you, nothing about how LessWrong operates on a day to day level will change. I have always cared deeply about the robustness and integrity of our institutions, and I am fully aligned with our stakeholders at EA.
To be honest, the key thing that EA brings to the table is money and talent. While the recent layoffs in EA's broader industry have been...
Why do I have dozens of points of strong upvote and downvote strength, but no more agreement strength than before I began my strength training? Does EA not think agreement is important?
After ~3 years as the ACX Meetup Czar, I've decided to resign from my position, and I intend to scale back my work with the LessWrong community as well. While this transition is not without some sadness, I'm excited for my next project.
I'm the Meetup Czar of the new Fewerstupidmistakesity community.
We're calling it Fewerstupidmistakesity because people get confused about what "Rationality" means, and this would create less confusion. It would be a stupid mistake to name your philosophical movement something very similar to an existing movement that's somewhat related but not quite the same thing. You'd spend years with people confusing the two.
What's Fewerstupidmistakesity about? It's about making fewer stupid mistakes, ideally down to zero such stupid mistakes. Turns out, human brains have lots of scientifically proven...
While I would hate to besmirch the good name of the fewerstupidmistakesist community, I cannot help but feel that misunderstanding morality and decision theory enough to end up doing a murder is a stupider mistake than drawing a gun once a firefight has started, though perhaps not quite as stupid as beginning the fight in the first place.
I think rationalists should consider taking more showers.
As Eliezer Yudkowsky once said, boredom makes us human. The childhoods of exceptional people often include excessive boredom as a trait that helped cultivate their genius:
A common theme in the biographies is that the area of study which would eventually give them fame came to them almost like a wild hallucination induced by overdosing on boredom. They would be overcome by an obsession arising from within.
Unfortunately, most people don't like boredom, and we now have little metal boxes and big metal boxes filled with bright displays that help distract us all the time, but there is still an effective way to induce boredom in a modern population: showering.
When you shower (or bathe, that also works), you usually are cut off...
Serious take: CDT might work, basically because of the Bellman fact that the option "receive 1 utilon" and the option "play a game with EV 1 utilon" are the same. So when you work out the Bellman equations, cases where each decision changes the game you are playing get integrated in. In any case where somebody is actually making decisions based on your decision theory, the actions you take in previous games might also have the result "restart from position x with a new game based on what they have simulated you doing." The hard part is figuring out binding.
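The Bellman identity being appealed to here can be written out (standard reinforcement-learning notation, my rendering rather than the author's):

$$V(s) = \max_{a} \Big[ r(s,a) + \mathbb{E}_{s' \sim P(\cdot \mid s,a)}\, V(s') \Big]$$

Under this recursion, a guaranteed payoff of 1 utilon and a lottery over continuation games whose expected value is 1 utilon contribute identically to $V(s)$, which is the "Bellman fact" the take relies on; decisions that change which game you are playing just alter the transition kernel $P$.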
Remember: There is no such thing as a pink elephant.
Recently, I was made aware that my “infohazards small working group” Signal chat, an informal coordination venue where we have frank discussions about infohazards and why it would be bad if specific hazards were leaked to the press or public, was accidentally shared with a deceitful and discredited so-called “journalist,” Kelsey Piper. She is not the first person to have been accidentally sent sensitive material from our group chat; however, she is the first to have threatened to go public about the leak. Needless to say, mistakes were made.
We’re still trying to figure out the source of this compromise to our secure chat group, however we thought we should give the public a live update to get ahead...