To get the virtue of the Void you need to turn off the gacha game and go touch grass. If you fail to achieve that, it is futile to protest that you acted with propriety.
The grass that can be touched is not the true grass.
Even if you accept that insects have value, helping insects right now is still quite questionable because it's a form of charity with zero long-term knock-on effects.
...to my confusion, not only do both of those look fine to me on mobile, the original post now also looks fine.
(Yes, I am on Android.)
On mobile I see no paragraph breaks, on PC I see them.
Edited to add what it looks like on mobile:
If there's less abuse happening in homeschooling than in regular schooling, a policy of "let's impose burdens on homeschooling to crack down on abuse in homeschooling" without a similar crackdown on abuse in non-home-schooling does not decrease abuse.
You can see something similar with self-driving cars. It is bad if a self-driving car crashes. It would be good to do things that reduce that. But if you get to a point where self-driving cars are safer than regular driving, and you continue to crack down on self-driving cars but not on regular driving, this is not good for safety overall.
Apropos of nothing in particular, do you think that abolishing the Dept. of Education would make things go better or worse?
Buying time for technical progress in alignment...to be made where, and by who?
Any of the many nonprofits, academic research groups, or alignment teams within AI labs. You don't have to bet on a specific research group to decide that it's worth betting on the ecosystem as a whole.
There's also a sizeable contingent that thinks none of the current work is promising, and that therefore buying a little time is valuable mainly insofar as it opens the possibility of buying a lot of time. Under this perspective, that still bottoms out in technical research progress eventually, even if, in the most pessimistic case, that progress has to route through future researchers who are cognitively enhanced.
Mostly fair, but tiers did have one other slight impact: they were used to bias the final room. Clay Golem and Hag were equally more likely to be in the final room, both less so than Dragon and Steel Golem but more so than Orcs and Boulder Trap.
Yes, that's a sneaky part of the scenario. In general, I think this is a realistic thing to occur: 'other intelligent people optimizing around this data' is one of the things that causes the most complicated things to happen in real-world data as well.
Christian Z R had a very good comment on this, where they mentioned looking at the subset of dungeons where Rooms 2 and 4 had the same encounter, or where Rooms 6 and 8 had the same encounter, to factor out the impact of intelligence and guarantee 'they will encounter this specific thing'.
(Edited to add...
I think puzzling out the premise could have been a lot more fun if we hadn't known the entry and exit squares going in
I think this would have messed up the difficulty curve a bit: telling players 'here is the entrance and exit' is part of what lets 'stick a tough encounter at the entrance/exit' be a simple strategy.
The writing was as fun and funny as usual - if not more so! - but seemed less . . . pointed?/ambitious?/thematically-coherent? than I've come to expect.
This is absolutely true though I'm surprised it's obvious: my originally-planned scenario did...
Here’s a third paper, showing that sports betting increases domestic violence. When the home team suffers an upset loss while sports betting is legal, domestic violence goes up by 9% that day, with lingering effects. An estimated 10 million Americans are victims of domestic violence each year.
I was suspicious of the methodology here (e.g. the difference between 'when the home team loses violence goes up by 9% if and only if gambling is legalized' and 'when the home team loses violence goes up by 10% if gambling is not legalized but by ...
The dungeon is laid out as depicted; Room 3 does not border Room 4, and does border Room 6. You don't, however, know what exactly the adventurers are going to do in your dungeon, or which encounters they are going to do in which order. Perhaps you could figure that out from the dataset.
(I've edited the doc to make this clearer).
I think you may have mixed up the ordering halfway through the example: in the first and third tables 'Emma and you' is $90 while 'Emma and Liam' is $30, but in the second it's the other way around, and some of the charts seem odd as a result?
I don't think you should feel bad about that! This scenario was pretty complicated and difficult, and even if you didn't solve it I think "tried to solve it but didn't quite manage it" is more impressive than "didn't try at all"!
There is a problem I want solved.
No-one, anywhere in the world, has solved it for me.
Therefore, Silicon Valley specifically is bad.
Were whichever markets you're looking at open at this time? Most stuff doesn't trade that much out of hours.
I think this is just an unavoidable consequence of the bonus objective being outside-the-box in some sense: any remotely-real world is much more complicated than the dataset can ever be.
If you were making this decision at a D&D table, you might want to ask the GM:
ETA: I have finally tracked down the trivial coding error that ended up distorting my model: I accidentally used kRace in a few places where I should have used kClass while calculating simon's values for Speed and Strength.
Thanks for looking into that: I spent most of the week being very confused about what was happening there but not able to say anything.
Yeah, my recent experience with trying out LLMs has not filled me with confidence.
In my case the correct solution to my problem (how to use kerberos credentials to authenticate a database connection using a certain library) was literally 'do nothing, the library will find a correctly-initialized krb file on its own as long as you don't tell it to use a different authentication approach'. Sadly, AI advice kept inventing ways for me to pass in the path of the krb file, none of which worked.
I'm hopeful that they'll get better going forward, but right now they are a substantial drawback rather than a useful tool.
Ah, sorry to hear that. You can still look for a solution even if you aren't in time to make it on the leaderboard!
Also, if you are interested in these scenarios in general, you can subscribe to the D&D. Sci tag (click the 'Subscribe' button on that page) and you'll get notifications whenever a new one is posted.
Your 'accidents still happen' link shows:
One airship accident worldwide in the past 5 years, in Brazil.
The last airship accident in the US was in 2017.
The last airship accident fatality anywhere in the world was in 2011 in Germany.
The last airship accident fatality in the US was in 1986.
I think that this compares favorably with very nearly everything.
How many of those green lights could the Wright Brothers have shown you?
You can correct it in the dataset going forward, but you shouldn't go back and correct it historically. To see why, imagine this simplified world:
One particularly perfidious example of this problem comes when incorrect data is 'corrected' to be more accurate.
A fictionalized conversation:
...Data Vendor: We've heard that Enron falsified their revenue data. They claimed to make eleven trillion dollars last year, and we put that in our data at the time, but on closer examination their total revenue was six dollars and one Angolan Kwanza, worth one-tenth of a penny.
Me: Oh my! Thank you for letting us know.
DV: We've corrected Enron's historical data in our database to reflect this upd-
Can you help me see this point? Why not correct it in the dataset? (Assuming that the dataset hasn't yet been used to train any models)
Success indeed, young Data Scientist! Archmage Anachronos thanks you for your aid, which will surely redound to the benefit of all humanity! (hehehe)
LIES! (Edit: post did arrive, just late, accusation downgraded from LIES to EXCESSIVE OPTIMISM REGARDING TIMELINES)
MUWAHAHAHAHA! YOU FOOL!
ahem
That is to say, I'm glad to have you playing, I enjoy seeing solutions even after scenarios are finished. (And I think you're being a bit hard on yourself, I think simon is the only one who actually independently noticed the trick.)
Petrov Day Tracker:
this scenario had no take-the-site-down option
I assume that this is primarily directed at me for this comment, but if so, I strongly disagree.
Security by obscurity does not in fact work well. I do not think it is realistic to hope that none of the ten generals look at the incentives they've been given and notice that their reward for nuking is 3x their penalty for being nuked. I do think it's realistic to make sure it is common knowledge that the generals' incentives are drastically misaligned with the citizens' incentives, and to try to do something about that.
(Honestly I think that I dis...
Eeeesh. I know I've been calling for a reign of terror with heads on spikes and all that, but I think that seems like going a bit too far.
Yes, we're working on aligning incentives upthread, but for some silly reason the admins don't want us starting a reign of terror.
I have. I think that overall Les Mis is rather more favorable to revolutionaries than I am. For one thing, it wants us to ignore the fact that we know what will happen when Enjolras's ideological successors eventually succeed, and that it will not be good.
(The fact that you're using the word 'watched' makes me suspect that you may have seen the movie, which is honestly a large downgrade from the musical.)
During WWII, the CIA produced and distributed an entire manual (well worth reading) about how workers could conduct deniable sabotage in the German-occupied territories.
...(11) General Interference with Organizations and Production
(a) Organizations and Conferences
- Insist on doing everything through "channels." Never permit short-cuts to be taken in order to expedite decisions.
- Make speeches, talk as frequently as possible and at great length. Illustrate your points by long anecdotes and accounts of personal experiences. Neve
Accepting a governmental monopoly on violence for the sake of avoiding anarchy is valuable to the extent that the government is performing better than anarchy. This is usually true, but stops being true when the government tries to start a nuclear war.
If the designers of Petrov Day are allowed to offer arbitrary 1k-karma incentives to generals to nuke people, but the citizens are not allowed to impose their own incentives, that creates an obvious power issue. Surely 'you randomly get +1k karma for nuking people' is a larger moderation problem than 'you get -1k karma for angering large numbers of other users'.
No, wait, that was the wrong way to put it...
...Do you hear the people sing, singing the song of angry men
It is the music of a people who will not be nuked again
The next time some generals decide
Such is life under a government. We have the monopoly on violence. This does unfortunately often imply power issues, but it is probably still better than anarchy and karma wars in the streets.
CITIZENS! YOU ARE BETRAYED!
Your foolish 'leaders' have given your generals an incentive scheme that encourages them to risk you being nuked for their glory.
I call on all citizens of EastWrong and WestWrong to commit to pursuing vengeance against their generals[1] if and only if your side ends up being nuked. Only thus can we align incentives among those who bear the power of life and death!
For freedom! For prosperity! And for not being nuked!
By mass-downvoting all their posts once their identities are revealed.
Lol, I mean, kind of fair, but mass-downvoting is still against the rules (though writing angry comments isn't), and we'll take moderation action against people who do it.
Launching nukes is one thing, but downvoting posts that don't deserve it? I'm not sure I want to retaliate that strongly.
The best LW Petrov Day morals are the inadvertent ones. My favorite was 2022, when we learned that there is more to fear from poorly written code launching nukes by accident than from villains launching nukes deliberately. Perhaps this year we will learn something about the importance of designing reasonable prosocial incentives.
Why is the benefit of nuking to generals larger than the cost of nuking to the other side's generals?
It is possible with precommitments under the current scheme for the two sides' generals to agree to flip a coin, have the winning side nuke the losing side, and have the losing side not retaliate. In expectation, this gives the generals each (1000-300)/2 = +350 karma.
I don't think that's a realistic payoff matrix.
The generals have bunkers and lots of stockpiles, they'll be fine. They might also find nuclear war somewhat exciting. How bad is a life lived as a king of the wasteland really compared to the glory of world domination?
See also:
A very good point. Especially after reading your other comment, I wonder if this is deliberate.
The payoff matrix for the generals suggests that in a one-way attack the winning generals gain more than the losers lose. Hence your coin-toss plan. But for the civilians it is the other way around (+25 for winning, but -50 for losing).
I suspect it may be some kind of message about how the generals launching the nuclear war have different incentives to the civilians, as the generals may place a higher value on victory, and are more likely to access bunkers and so on.
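Using the payoffs quoted in this thread (+1000/-300 karma for generals, +25/-50 for civilians), a quick sketch of the coin-flip plan's expected values makes the misalignment explicit (the function name here is mine, not part of the scenario):

```python
def coin_flip_ev(win_payoff, lose_payoff):
    """Expected payoff when each side wins the coin flip half the time."""
    return (win_payoff + lose_payoff) / 2

# Generals: +1000 karma for nuking, -300 for being nuked.
generals_ev = coin_flip_ev(1000, -300)   # +350: generals come out ahead
# Civilians: +25 if the other side is nuked, -50 if their side is nuked.
civilians_ev = coin_flip_ev(25, -50)     # -12.5: civilians come out behind
print(generals_ev, civilians_ev)
```

The sign flip is the whole point: the same coin-flip agreement that is profitable for the generals is an expected loss for the civilians they represent.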
Eliezer, this is what you get for not writing up the planecrash threat lecture thread. We'll keep bothering you with things like this until you give in to our threats and write it.
Splitting out 'eating out' and 'food at home' is good, but not the whole story due to the rise of delivery.
I believe the snarky phrasing is "Inflation is bad? Or you ordered a private taxi for your burrito?"
Doesn't that just make it even more confusing? I guess we also buy taxis for our groceries, but the overhead is much lower when you're buying hundreds of dollars worth of groceries instead of a $10 burrito. Plus, these prices all tracked each other from 2000-2010, but Instacart didn't even exist until 2012.
I don't actually think 'Alice gets half the money' is the fair allocation in your example.
Imagine Alice and Bob splitting a pile of 100 tokens, which either of them can exchange for $10M each. It seems obvious that the fair split here involves each of them ending up with $500M.
To say that the fair split in your example is for each player to end up with $500M is to place literally zero value on 'token-exchange rate', which seems unlikely to be the right resolution.
Update: the market has resolved to Edmundo Gonzales (not to Maduro). If you think this is not the right resolution given the wording, I agree with you. But if you think the wording was clear and unambiguous to begin with, I think this should suggest otherwise.
So the original devs have all long since left the firm, and as I'm sure you've discovered the documentation is far from complete.
With that said, it sounds like you haven't read the original requirements doc, and so you've misunderstood what DEMOCRACY was built for. It's not a control subsystem, it's a sacrificial sandbox environment to partition attackers away from the rest of the system and limit the damage they can do to vital components like ECONOMY and HUMAN_RIGHTS.
The 'Constitution.doc' file specifies the parameters of the sandboxed environment,...
(Non-expert opinion).
For a robot to pass the Turing Test turned out to be less a question about the robot and more a question about the human.
Against expert judges, I still think LLMs fail the Turing Test. I don't think current AI can pretend to be a competent human in an extended conversation with another competent human.
Against non-expert judges, I think the Turing Test was technically passed long before LLMs: didn't some of the users of ELIZA think and act like it was human? And how does that make you feel?
I agree that that would probably be the reasonable thing to do at this point. However, that's not actually what the Polymarket market has done - it's still Disputed, and in fact Maduro has traded down to 81%.
And I think a large portion of the reason why this has happened is poor decision-making in how the question was initially worded.
Edited to add: Maduro is now down to 63% in the market, probably because the US government announced that it thinks his opponent won? No, that's not an official Venezuelan source. But it seems to have moved the market anyway.
My sense is this security would be fine? Is there a big issue with this being a security?
In the sense that it would find a market-clearing price, it's fine. But in the sense of its price movements being informative...well. Say the price of that security has just dropped by 10%.
Is the market reflecting that bad news about Google's new AI model is likely to reflect poor long-term prospects? Is it indicating that increased regulatory scrutiny is likely to be bad for Google's profitability?
Or is Sundar Pichai going bald?
I agree that most markets resolve successfully, but think we might not be on the same page on how big a deal it is for 5% of markets to end up ambiguous.
If someone offered you a security with 95% odds to track Google stock performance and 5% odds to instead track how many hairs were on Sundar Pichai's head, this would not be a great security! A stock market that worked like that would not be a great stock market!
In particular:
I don't think 'official information from Venezuela' is fully unambiguous. What should happen if the CNE declares Maduro the winner, but Venezuela's National Assembly refuses to acknowledge Maduro's win and appoints someone else to the presidency? This is not a pure hypothetical, this literally happened in 2018! Do we need to wait on resolving the market until we see whether that happens again?
I agree that resolving that market to Maduro is probably the right approach right now. But I don't actually think the market description is en...
The phrase "Robbers don't need to rob people" is generally accurate.
But saying "Robbers don't need to rob people," and writing a long argument in support of that, makes it seem like you might be confused about the thought processes of robbers.
If robbers had a lot of cultural cachet and there were widely-disseminated arguments implying that robbers need to rob people, I think there would be a lot of value in a piece narrowly arguing that robbers don't need to rob people, regardless of your views on their thought processes.