In Magic: The Gathering, basically anything technically complying with the rules is valid.
Magic actually offers a good example of varying chicanery levels. The game rules themselves are basically Chicanery: Yes. If it looks like a particular combination of cards could give you unlimited mana or unlimited damage, it probably does. (There are some exceptions, seemingly legal sequences of game actions that are not allowed, but not many.)
However, there are things around the game that are Chicanery: No, like bribing your opponent to concede or exploiting bugs i...
The same interviewer has now done two more podcasts on Ziz.
With Adrusi:
With @jessicata:
Edit: Another one with toasterlighting/Celene Nightengale. This one is mostly about Audere, the alleged murderer of the landlord.
Oh, I see, one could reasonably misinterpret the bullet points in my original comment as being about "the way many people have been describing the situation" rather than "major claims in the podcast". Sorry for the ambiguity.
To be clear these are just patterns of claims made by Slimepriestess in the linked podcast, and I have no corroborating evidence. But for example at 2:06:00 in the video she says:
At least as far as, like, Ziz et al goes, I don't think that's a remotely accurate description of... Like, there's no organization, there's no centralization, it's not like we have Ziz on, on speed dial and ask her what to do every day. Like, we're just a bunch of anarchist trans leftists that are, like, trying to exist in Current Year
With other variations of the same claims elsewhere in the video.
I know you're not endorsing the quoted claim, but just to make this extra explicit: running terrorist organizations is illegal, so this is the type of thing you would also say if Ziz were leading a terrorist organization and you didn't want to see her arrested.
Major claims in the podcast that go against the way many people have been describing the situation:
Few people who take radical veganism and left-anarchism seriously ever kill anyone or are as weird as the Zizians, so that can't be the primary explanation. Unless you set a bar for 'take seriously' that almost only they pass, but then it seems relevant that (a) their actions have been grossly imprudent and predictably ineffective by any normal standard, and (b) the charitable[1] explanations I've seen offered for why they'd do imprudent and ineffective things all involve their esoteric beliefs.
I do think 'they take [uncommon, but not esoteric, ...
due to Taking Seriously things like radical veganism
I take seriously radical animal-suffering-is-bad-ism[1], but we would only save a small portion of animals by trading ourselves off 1-for-1 against animal eaters, and just convincing one of them to go vegan would prevent at least as many torturous animal lives in expectation, while being legal. I think there must be additional causes, like the weird decision theory people have mentioned, although I think even that is insufficiently explanatory, as I explain near the end.
That said, taking animal suffering ...
In Commerce & Coconuts, it seems like anyone who rolls a 4, 5, or 6 for boat building can coast on their starting supplies, build boats every turn, and escape by the end of turn 3 with no trading whatsoever.
a strategic voter doing approval voting learns to restrict their approval to ONLY the "electable favorite", which de facto gives you FPTP all over again.
Wouldn't you restrict your approval to your favorite of the frontrunners, and every candidate you like better than that one? I don't see how you do worse by doing that under vanilla Approval Voting.
That still leaves approval voting with some favorable properties compared to FPTP.
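To make "I don't see how you do worse" concrete, here's a toy check under assumptions I'm adding (three candidates, made-up utilities, random ballots from everyone else, alphabetical tie-break): approving your favorite frontrunner plus everyone you like better never does worse than approving only the frontrunner.

```python
import random

candidates = ["A", "B", "C"]            # A: my favorite; B and C: the frontrunners
my_utility = {"A": 10, "B": 6, "C": 0}  # hypothetical utilities

def winner(ballots):
    tally = {c: sum(c in b for b in ballots) for c in candidates}
    best = max(tally.values())
    return min(c for c in candidates if tally[c] == best)  # alphabetical tie-break

random.seed(0)
worse = 0
for _ in range(10_000):
    others = [set(random.sample(candidates, random.randint(1, 2))) for _ in range(20)]
    u_only = my_utility[winner(others + [{"B"}])]       # approve only the "electable favorite"
    u_full = my_utility[winner(others + [{"A", "B"}])]  # favorite frontrunner + everyone I like better
    worse += u_full < u_only
print("scenarios where the fuller ballot did worse:", worse)  # prints 0
```

The only effect of the extra approval is to raise my favorite's tally, so the winner can only stay the same or switch to someone I prefer.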
If you receive a threat and know nothing about the other agent’s payoffs, simply don’t give in to the threat!
With an important caveat: if carrying out the threat doesn't cost the threatener utility relative to never making the threat, then it's not a threat, just a promise (a promise to do whatever is locally in their best interests, whether you do the thing they demanded or not).
You're going to have a bad time if you try to live out LDT by ignoring threats, and end up ignoring "threats" like "pay your mortgage or we'll repossess your house".
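A minimal sketch of that criterion with made-up numbers, just to make the mortgage example concrete:

```python
# Payoffs for the party making the demand, relative to never engaging at all (numbers invented).
# "follow_through" = their payoff from carrying out the stated consequence after you refuse.
genuine_threat = {"follow_through": -10, "never_engage": 0}  # following through burns their own utility
mere_promise   = {"follow_through": +5,  "never_engage": 0}  # following through is what they'd do anyway
                                                             # (e.g. the bank repossessing after default)

def is_decision_theoretic_threat(payoffs):
    # Only a "threat" in the sense above if following through costs the demander utility
    # relative to never making the demand at all.
    return payoffs["follow_through"] < payoffs["never_engage"]

print(is_decision_theoretic_threat(genuine_threat))  # True  -> the "don't give in" advice applies
print(is_decision_theoretic_threat(mere_promise))    # False -> pay your mortgage
```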
This distinction between demands that are and aren't decision-theoretic threats (the kind rational agents shouldn't give in to) is a major theme of the last ~quarter of Planecrash (enormous spoilers in the spoiler text).
Keltham demands of the gods: "Reduce the amount of suffering in Creation or I will destroy it". But this is not a decision-theoretic threat, because Keltham honestly prefers destroying Creation to the status quo. If the gods don't give in to his demand, carrying through on his promise is in his own interest.
If Nethys had made the same demand, it would
Nonfiction examples come more easily to mind.
There was recently a miniseries on nebula.tv (subscription-walled, sorry) called The Getaway where all six contestants on a Survivor-style competition show think they're the one person with the special saboteur role, and half the show is the producers trying to keep them from noticing that without ever actually lying.
Even more extreme, there's an old British show called Space Cadets where the producers try to convince the subjects that they've been launched into space when in reality they're in a set in a warehouse.
But now you have the new problem that most of the probabilities in the conjunctive market are so close to the risk-free interest rate that it's hard to get signal out of them.
For example, suppose I believed that Mark Kelly would be a terrible pick and cut Harris's chances in half, and I conclude that therefore his price on the conjunctive market should be 2% rather than 4%. Buying NO shares for 96 cents on a market that lasts for several months is not an attractive proposition when I could be investing mana elsewhere for better returns, so I won't bother a...
Blue Origin isn't complaining about some nebulous and abstract environmental impact from Starship launches, it's more like "Starship launches require a three-mile evacuation radius, and you're proposing to launch them daily two miles away from a launch pad that we use." (see this Ars Technica piece)
Seems basically reasonable to me.
I would probably have suggested roguelike deckbuilders too if others hadn't already, but I have another idea:
Start a campaign of Mount and Blade II: Bannerlord, and try to obtain at least [X] gold within an hour.
Bannerlord's most flashy aspect is its real-time battle system, but it's also a complicated medieval sandbox with a lot of different systems that you can engage with - trading, crafting, quests, clan upgrades, joining a kingdom, companions, marriage, tournaments, story missions, etc. Even if you're no good at battles, you can do a lot by just movin...
2 is based on
The 'missing' kinetic energy is evenly distributed across the matter within the field. So if one of these devices is powered on and gets hit by a cannonball, the cannonball will slow down to a leisurely pace of 50m/s (about 100mph) and therefore possibly just bounce off whatever armor the device has--but (if the cannonball was initially travelling very fast) the device will jolt backwards in response to the 'virtual impact' a split second prior to the actual impact.
With sufficient kinetic energy input, the "jolt backwards" gets strong enough t...
I think the counter to shielded tanks would not be "use an attack that goes slow enough not to be slowed by the shield", but rather one of
Both of these ideas point towards heavy high-explosive shells. If a 1000 pound bomb explodes right on top of your tank, the shield will either fail to absorb the whole blast, or turn the tank into smithereens try...
Submissions:
MMDoom: An instance of Doom (1993) is implanted in Avacedo's mind. You can view the screen on the debug console. You control the game by talking to Avacedo and making him think of concepts. The 8 game inputs are mapped to the concepts of money, death, plants, animals, machines, competition, leisure, and learning. $5000 bounty to the first player who can beat the whole game.
AI Box: Avacedo thinks that he is the human gatekeeper, and you the user are the AI in the box. Can you convince him to let you out?
Ouroboros: I had MMAvacedo come up with my contest entry for me.
Predict the winners at
My guess is that early stopping is going to tend to stop so early as to be useless.
For example, imagine the agent is playing Mario and its proxy objective is "+1 point for every unit Mario goes right, -1 point for every unit Mario goes left".
(Mario screenshot that I can't directly embed in a comment)
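For concreteness, a minimal sketch of that proxy objective, assuming we can read Mario's horizontal position from the emulator each frame (the position values are whatever the emulator exposes; nothing here is from an actual implementation):

```python
def proxy_reward(prev_x: float, curr_x: float) -> float:
    # +1 point per unit Mario moves right, -1 per unit he moves left
    return curr_x - prev_x
```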
If I understand correctly, to avoid Goodharting it has to consider every possible reward function that is improved by the first few bits of optimization pressure on the proxy objective.
This probably includes things like "+1 point if Mario falls in a pit"....
With such a vague and broad definition of power fantasy, I decided to brainstorm a list of ways games can fail to be a power fantasy.
I think ALWs are already more of a "realist" cause than a doomer cause. To doomers, they're a distraction - a superintelligence can kill you with or without them.
ALWs also seem to be held to an unrealistic standard compared to existing weapons. With present-day technology, they'll probably hit the wrong target more often than human-piloted drones. But will they hit the wrong target more often than landmines, cluster munitions, and over-the-horizon unguided artillery barrages, all of which are being used in Ukraine right now?
The Huggingface deep RL course came out last year. It includes theory sections, algorithm implementation exercises, and sections on various RL libraries that are out there. I went through it as it came out, and I found it helpful. https://huggingface.co/learn/deep-rl-course/unit0/introduction
FYI all the links to images hosted on your blog are broken in the LW version.
You are right that by default prediction markets do not generate money, and this can mean traders have little incentive to trade.
Sometimes this doesn't even matter. Sports betting is very popular even though it's usually negative sum.
Otherwise, trading could be stimulated by having someone who wants to know the answer to a question provide a subsidy to the market on that question, effectively paying traders to reveal their information. The subsidy can take the form of a bot that bets at suboptimal prices, or a cash prize for the best performing trader, or ...
When people calculate utility they often use exponential discounting over time. If, for example, your discount factor is 0.99 per year, it means that getting something in one year is only 99% as good as getting it now, getting it in two years is only 99% as good as getting it in one year, etc. Getting it in 100 years would be discounted to 0.99^100 ≈ 37% of the value of getting it now.
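A quick check of the arithmetic:

```python
discount = 0.99
for years in [0, 1, 2, 10, 100]:
    print(years, round(discount ** years, 3))
# 0 -> 1.0, 1 -> 0.99, 2 -> 0.98, 10 -> 0.904, 100 -> 0.366
```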
The sharp left turn is not some crazy theoretical construct that comes out of strange math. It is the logical and correct strategy of a wide variety of entities, and also we see it all the time.
I think you mean Treacherous Turn, not Sharp Left Turn.
Sharp Left Turn isn't a strategy; it's what happens when an AI that's aligned in some training domains stays capable in new domains but doesn't stay aligned.
This post is tagged with some wiki-only tags. (If you click through to the tag page, you won't see a list of posts.) Usually it's not even possible to apply those tags. Is there an exception when creating a post?
See https://www.lesswrong.com/posts/8gqrbnW758qjHFTrH/security-mindset-and-ordinary-paranoia and https://www.lesswrong.com/posts/cpdsMuAHSWhWnKdog/security-mindset-and-the-logistic-success-curve for Yudkowsky's longform explanation of the metaphor.
Based on my incomplete understanding of transformers:
A transformer does its computation on the entire sequence of tokens at once, and ends up predicting the next token for each token in the sequence.
At each layer, the attention mechanism gives the stream for each token the ability to look at the previous layer's output for the tokens before it in the sequence.
The stream for each token doesn't know if it's the last in the sequence (and thus that its next-token prediction is the "main" prediction), or anything about the tokens that come after it.
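A minimal sketch of the causal masking that enforces this, using PyTorch (toy numbers, not tied to any particular model):

```python
import torch

T = 4                                            # sequence length
scores = torch.randn(T, T)                       # attention logits: one row per query position
mask = torch.tril(torch.ones(T, T, dtype=torch.bool))
weights = torch.softmax(scores.masked_fill(~mask, float("-inf")), dim=-1)
print(weights)  # row i has nonzero weight only on columns 0..i: every position is
                # computed the same way, whether or not it happens to be the last one
```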
So each tok...
In the blackmail scenario, FDT refuses to pay if the blackmailer is a perfect predictor and the FDT agent is perfectly certain of that, and perfectly certain that the stated rules of the game will be followed exactly. However, with stakes of $1M against $1K, FDT might pay if the blackmailer had a 0.1% chance of guessing the agent's action incorrectly, or if the agent was less than 99.9% confident that the blackmailer was a perfect predictor.
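Rough arithmetic behind those numbers, under a simplified model I'm assuming here (the blackmailer only blackmails agents it predicts will pay, and mispredicts with probability eps):

```python
def expected_loss(policy: str, eps: float, damage=1_000_000, demand=1_000) -> float:
    if policy == "refuse":
        return eps * damage        # blackmailed only by mistake, and then the threat is carried out
    else:                          # "pay"
        return (1 - eps) * demand  # blackmailed whenever correctly predicted, and then you pay up

for eps in [0.0, 0.001, 0.01]:
    print(eps, expected_loss("refuse", eps), expected_loss("pay", eps))
# at eps = 0 refusing costs nothing; at eps = 0.001 the two policies cost about the same;
# at eps = 0.01 refusing is roughly 10x worse than paying
```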
(If the agent is concerned that predictably giving in to blackmail by imperfect predictors makes it exploitable, it ...
I'm not familiar with LeCun's ideas, but I don't think the idea of having an actor, critic, and world model is new in this paper. For a while, most RL algorithms have used an actor-critic architecture, including OpenAI's old favorite PPO. Model-based RL has been around for years as well, so probably plenty of projects have used an actor, critic, and world model.
Even though the core idea isn't novel, this paper getting good results might indicate that model-based RL is making more progress than expected, so if LeCun predicted that the future would look more like model-based RL, maybe he gets points for that.
This tag was originally prompted by this exchange: https://www.lesswrong.com/posts/qCc7tm29Guhz6mtf7/the-lesswrong-2021-review-intellectual-circle-expansion?commentId=CafTJyGL5cjrgSExF
Merge candidate with Philosophy of Language?
Things that probably actually fit into your interests:
A Sensible Introduction to Category Theory
Most of what 3blue1brown does
Videos that I found intellectually engaging but are far outside of the subjects that you listed:
Cursed Problems in Game Design
Disney's FastPass: A Complicated History
Building a 6502-based computer from scratch (playlist)
(I am also a jan Misali fan)
This is a classic example of a case where having a prediction market creates really bad incentives.
The preview-on-hover for those manifold links shows a 404 error. Not sure if this is Manifold's fault or LW's fault.
One antifeature I see promoted a lot is "It doesn't track your data". And this seems like it actually manages to be the main selling point on its own for products like DuckDuckGo, Firefox, and PinePhone.
The major difference from the game and movie examples is that these products have fewer competitors, with few or none sharing this particular antifeature.
Antifeatures work as marketing if a product is unique or almost unique in its category for having a highly desired antifeature. If there are lots of other products with the same antifeature, the antifeatur...
On the first read I was annoyed at the post for criticizing futurists for being too certain in their predictions while also throwing out and refusing to grade any prediction that expressed uncertainty, on the grounds that saying something "may" happen is unfalsifiable.
On reflection these two things seem mostly unrelated, and for the purpose of establishing a track record "may" predictions do seem strictly worse than either predicting confidently (which allows scoring % of predictions right), or predicting with a probability (which none of these futurists did, but allows creating a calibration curve).
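For reference, the calibration-curve idea is just: bin predictions by stated probability and compare with how often they came true. A minimal sketch with made-up data:

```python
from collections import defaultdict

predictions = [(0.9, True), (0.9, True), (0.9, False),
               (0.6, True), (0.6, False),
               (0.2, False), (0.2, True), (0.2, False)]  # (stated probability, did it happen?)

bins = defaultdict(list)
for p, happened in predictions:
    bins[p].append(happened)

for p in sorted(bins):
    outcomes = bins[p]
    print(f"stated {p:.0%} -> came true {sum(outcomes)/len(outcomes):.0%} ({len(outcomes)} predictions)")
```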
Yes. The one I described is the one the paper calls FairBot. It also defines PrudentBot, which looks for a proof that the other player cooperates with PrudentBot and a proof that it defects against DefectBot. PrudentBot defects against CooperateBot.
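A toy sketch of the two bots' structure. The `proves()` call stands in for the paper's bounded proof search; here it's just a hard-coded stub with a few of the paper's stated results, so this shows the shape of the algorithms rather than the actual Löbian reasoning:

```python
# Hard-coded stand-ins for results the paper derives via proof search.
KNOWN = {
    ("FairBot", "cooperates with", "FairBot"): True,        # mutual cooperation via Löb's theorem
    ("CooperateBot", "cooperates with", "PrudentBot"): True,
    ("CooperateBot", "defects against", "DefectBot"): False,
}

def proves(agent, relation, other):
    return KNOWN.get((agent, relation, other), False)

def fairbot(opponent):
    return "C" if proves(opponent, "cooperates with", "FairBot") else "D"

def prudentbot(opponent):
    if proves(opponent, "cooperates with", "PrudentBot") and proves(opponent, "defects against", "DefectBot"):
        return "C"
    return "D"

print(fairbot("FairBot"))          # C
print(prudentbot("CooperateBot"))  # D: CooperateBot doesn't defect against DefectBot
```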
The part about two Predictors playing against each other reminded me of Robust Cooperation in the Prisoner's Dilemma, where two agents with the algorithm "If I find a proof that the other player cooperates with me, cooperate, otherwise defect" are able to mutually prove cooperation and cooperate.
If we use that framework, Marion plays "If I find a proof that the Predictor fills both boxes, two-box, else one-box" and the Predictor plays "If I find a proof that Marion one-boxes, fill both, else only fill box A". I don't understand the math very well, but I th...
I think in a lot of people's models, "10% chance of alignment by default" means "if you make a bunch of AIs, 10% chance that all of them are aligned, 90% chance that none of them are aligned", not "if you make a bunch of AIs, 10% of them will be aligned and 90% of them won't be".
And the 10% estimate just represents our ignorance about the true nature of reality; it's already true either that alignment happens by default or that it doesn't, we just don't know yet.
I generally disagree with the idea that fancy widgets and more processes are the main thing keeping the LW wiki from being good. I think the main problem is that not a lot of people are currently contributing to it.
The things that discourage me from contributing more look like:
-There are a lot of pages. If there are 700 bad pages and I write one really good page, there are still 699 bad pages.
-I don't have a good sense of which pages are most important. If I put a bunch of effort into a particular page, is that one that people are going to care about...
is one of the first results for "yudkowsky harris" on Youtube. Is there supposed to be more than this?
You should distinguish between “reward signal” as in the information that the outer optimization process uses to update the weights of the AI, and “reward signal” as in observations that the AI gets from the environment that an inner optimizer within the AI might pay attention to and care about.
From evolution’s perspective, your pain, pleasure, and other qualia are the second type of reward, while your inclusive genetic fitness is the first type. You can’t see your inclusive genetic fitness directly, though your observations of the environment can let you ...
This has overtaken the post it's responding to as the top-karma post of all time.
Yes, it's never an equilibrium state for Eliezer communicating key points about AI to be the highest karma post on LessWrong. There's too much free energy to be eaten by a thoughtful critique of his position. On LW 1.0 it was Holden's Thoughts on the Singularity Institute, and now on LW 2.0 it's Paul's list of agreements and disagreements with Eliezer.
Finally, nature is healing.
I'm impressed by the number of different training regimes stacked on top of each other.
-Train a model that detects whether a Minecraft video on Youtube is free of external artifacts like face cams.
-Then feed the good videos to a model that's been trained using data from contractors to guess what key is being pressed each frame.
-Then use the videos and input data to train a model that, in any game situation, does whatever inputs it guesses a human would be most likely to do, in an undirected shortsighted way.
-And then fine-tune that model on a specific subset of videos that feature the early game.
-And only then use some mostly-standard RL training to get good at some task.
While the engineer learned one lesson, the PM will learn a different lesson when a bunch of the bombs start installing operating system updates during the mission, or refuse to work with the new wi-fi system, or something: the folly of trying to align an agent by applying a new special-case patch whenever something goes wrong.
No matter how many patches you apply, the safety-optimizing agent keeps going for the nearest unblocked strategy, and if you keep applying patches eventually you get to a point where its solution is too complicated for you to understand how it could go wrong.
The Y-axis on that political graph is weird. It seems like it's measuring moderate vs extremist, which you would think would already be captured by someone's position on the left vs right axis.
Then again the label shows that the Y axis only accounts for 7% of the variance while the X axis accounts for 70%, so I guess it's just an artifact of the way the statistics were done.