As of June 2024, I have signed no contracts or agreements whose existence I cannot mention.
French, but because my teacher tried to teach all of the days of the week at the same time, they still give me trouble.
They're named after the planets: Sun-day, Moon-day, Mars-day, Mercury-day, Jupiter-day, Venus-day, and Saturn-day.
It's easy to remember when you realize that the English names are just the equivalent Norse gods: Saturday, Sunday and Monday are obvious. Tyr's-day (god of combat, like Mars), Odin's-day (eloquent traveler god, like Mercury), Thor's-day (god of thunder and lightning, like Jupiter), and Freyja's-day (goddess of love, like Venus) are how we get the names Tuesday, Wednesday, Thursday, and Friday.
Why is Google the biggest search engine even though it wasn't the first? It's because Google has a better signal-to-noise ratio than most search engines. PageRank cut through all the affiliate cruft when other search engines couldn't, and they've only continued to refine their algorithms.
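For the curious, the core idea behind PageRank fits in a few lines of Python. This is only a toy sketch of the published power-iteration idea (the page names and graph are invented for illustration; Google's production system is far more elaborate):

```python
# Toy PageRank: rank pages by the stationary distribution of a "random surfer".
# A link from A to B counts as a vote that is itself weighted by A's own rank,
# which is what let PageRank discount low-quality link cruft.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}
    for _ in range(iterations):
        # Everyone gets a small "teleport" share, then inherits votes.
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            for target in outlinks:
                new[target] += damping * rank[page] / len(outlinks)
        rank = new
    return rank

links = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["home", "about"],
    "mirror": ["home"],  # a copy that points back at the original
}
ranks = pagerank(links)  # "home", with the most weighted inlinks, ranks highest
```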
But still, haven't you noticed that when Wikipedia comes up in a Google search, you click that first? Even when it's not the top result? I do. Sometimes it's not even the article I'm after, but its external links. And then I think to myself, "Why didn't I just search Wikipedia in the first place?" Why do we do that? Because we expect to find what we're looking for there. We've learned from experience that Wikipedia has a better signal-to-noise ratio than a Google search.
If LessWrong and Wikipedia came up in the first page of a Google search, I'd click LessWrong first. Wouldn't you? Not from any sense of community obligation (I'm a lurker), but because I expect a higher probability of good information here. LessWrong has a better signal-to-noise ratio than Wikipedia.
LessWrong doesn't specialize in recipes or maps. Likewise, there's a lot you can find through Google that's not on Wikipedia (and good luck finding it if Google can't!), but we still choose Wikipedia over Google's top hit when available. What is on LessWrong is insightful, especially in normally noisy areas of inquiry.
Yes.
Hissp v0.5.0 is up.
python -m pip install hissp
If you always wanted to learn about Lisp macros, but only know Python, try the Hissp macro tutorials.
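The key idea can be sketched in plain Python, no Hissp required: Lisp code is made of data structures (in Hissp's case, literal Python tuples), and a macro is just a function that rewrites code-as-tuples before anything runs. The `unless` macro and tiny evaluator below are illustrative toys, not Hissp's actual compiler API:

```python
# Conceptual sketch of what a Lisp macro does, in plain Python.
# (In Hissp, code really is made of tuples like these; this toy "evaluator"
# is only an illustration, not Hissp's actual API.)

def unless(condition, body):
    """A macro: a function that receives CODE and returns new CODE."""
    # (unless c body) rewrites to (if c None body) before evaluation.
    return ("if", condition, ("quote", None), body)

def evaluate(form):
    """A tiny evaluator for this toy's tuple forms."""
    if not isinstance(form, tuple):
        return form  # atoms evaluate to themselves
    head, *args = form
    if head == "quote":
        return args[0]
    if head == "if":
        cond, then, else_ = args
        return evaluate(then) if evaluate(cond) else evaluate(else_)
    if head == "unless":  # macroexpansion happens before evaluation
        return evaluate(unless(*args))
    raise ValueError(head)

evaluate(("unless", False, 42))  # → 42
evaluate(("unless", True, 42))   # → None
```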
That seems to be getting into Game Theory territory. One can model agents (players) with different strategies, even suboptimal ones. A lot of the insight from Game Theory isn't just about how to play a better strategy, but how changing the rules affects the game.
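As a toy illustration (payoff numbers invented for the example), here's the classic Prisoner's Dilemma in a few lines of Python, showing how changing the rules, in this case adding a fine for defection, changes the rational strategy:

```python
# "Changing the rules" in game theory means changing payoffs,
# which can move the equilibrium. Payoff numbers here are made up.

def best_response(payoffs, opponent_action):
    """Return my payoff-maximizing action given the opponent's action."""
    return max(("cooperate", "defect"),
               key=lambda a: payoffs[(a, opponent_action)])

# Prisoner's Dilemma payoffs: my payoff, keyed by (my move, their move).
pd = {("cooperate", "cooperate"): 3, ("cooperate", "defect"): 0,
      ("defect", "cooperate"): 5, ("defect", "defect"): 1}

best_response(pd, "cooperate")  # "defect" — defection dominates...

# Now change the rules: an enforced fine of 3 for defecting.
fined = {move: pay - (3 if move[0] == "defect" else 0)
         for move, pay in pd.items()}
best_response(fined, "cooperate")  # "cooperate" — ...until the rules change.
```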
Not sure I understand what you mean by that. The Universe seems to follow relatively simple deterministic laws. That doesn't mean you can use quantum field theory to predict the weather. But chaotic systems can be modeled as statistical ensembles. Temperature is a meaningful measurement even if we can't calculate the motion of all the individual gas molecules.
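That point can be sketched in a few lines of Python using the logistic map as a stand-in for a chaotic system (parameters and step counts chosen only for illustration): individual trajectories are unpredictable, but the ensemble average is stable, like temperature:

```python
# Chaos vs. ensembles: individual logistic-map trajectories (r=4) diverge
# from the tiniest difference in starting point, yet ensemble statistics
# stay stable and predictable, like temperature for gas molecules.

def trajectory(x, steps=100):
    for _ in range(steps):
        x = 4 * x * (1 - x)  # chaotic logistic map
    return x

a = trajectory(0.3)
b = trajectory(0.3 + 1e-12)  # nearly identical start...
spread = abs(a - b)          # ...ends far apart (sensitive dependence)

# But the average over a large ensemble of starting points is stable:
ensemble = [trajectory(i / 10001) for i in range(1, 10001)]
mean = sum(ensemble) / len(ensemble)  # ≈ 0.5, despite the chaos
```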
If you're referring to human irrationality in particular, we can study cognitive bias, which is how human reasoning diverges from that of idealized agents in certain systematic ways. This is a topic of interest at both the individual level of psychology, and at the level of statistical ensembles in economics.
It's short for "woo-woo", a derogatory term skeptics use for magical thinking.
I think the word originates as onomatopoeia from the haunting woo-woo Theremin sounds played in black-and-white horror films when the ghost was about to appear. It's what the "supernatural" sounds like, I guess.
It's not about the belief being unconventional as much as it being irrational. Just because we don't understand how something works doesn't mean it doesn't work (it just probably doesn't), but we can still call your reasons for thinking so invalid. A classic skeptic might categorically dismiss anything associated with woo, but rationalists judge by the preponderance of the evidence. Some superstitions are valid. Prescientific cultures may still have learned true things, even if they can't express them well to outsiders.
Use a smart but not self-improving AI agent to antagonize the world with the goal of making advanced societies believe that AGI is a bad idea and precipitating effective government actions. You could call this the Ozymandias approach.
ChaosGPT already exists. It's incompetent to the point of being comical at the moment, but maybe more powerful analogues will appear and wreak havoc. Considering the current prevalence of malware, it might be more surprising if something like this didn't happen.
We've already seen developments that could have been considered AI "warning shots" in the past. So far, they haven't been enough to stop capabilities advancement. Why would the next one be any different? We're already living in a world with literal wars killing people right now, and crazy terrorists with various ideologies. It's surprising what people get used to. How bad would a warning shot have to be to shock the world into action given that background noise? Or would we be desensitized by then by the smaller warning shots leading up to it? Boiling the frog, so to speak. I honestly don't know. And by the time a warning shot gets that bad, can we act in time to survive the next one?
Intentionally causing earlier warning shots would be evil, illegal, destructive, and undignified. Even "purely" economic damage at sufficient scale is going to literally kill people. Our best chance is civilization stepping up and coordinating. That means regulations and treaties, and only then the threat of violence to enforce the laws and impose the global consensus on any remaining rogue nations. That looks like the police and the army, not terrorists and hackers.
I feel like this has come up before, but I'm not finding the post. You don't need the stick-on mirrors to eliminate the blind spot. I don't know why pointing side mirrors straight back is still so popular, but that's not the only way it's taught. I have since learned to set mine much wider.
This article explains the technique. (See the video.)
In a nutshell, while in the driver's seat, tilt your head to the left until it's almost touching your window, then from that perspective point the mirror straight back so you can just see the side of your car. (You might need a similar adjustment for the passenger's side, but those are often already wide-angle.) Now from your normal position, you can see your former "blind spot". When you need to see straight back in your side mirror (like when backing out), just tilt your head again. Remember that you also have a center mirror. You should be able to track passing cars in your center mirror, then in your side mirror, then in your peripheral vision, without ever turning your head or completely losing sight of them.