I see... so trolling by patenting something akin to convolutional neural networks wouldn't work because you can't tell what's powering a service unless the company building it tells you.
Maybe something along the lines of "service that does automatic text translation" or "car that drives itself" (obviously not these, since a patent with so much prior art would never get granted) would be a thing that you could fight over?
Hi! I wrote a summary with some of my thoughts in this post as part of an ongoing effort to stop sucking at researching stuff. This article was a big help, thank you!
I'm glad you enjoyed it! I agree that more should be done. Just listing the specific search advice on the new table of contents would help a lot.
I'm gonna do the work, I promise. I'm just working up the nerve. Saying, in effect, "this experienced professional should have done his work better, let me show you how" is scary as balls.
First of all: thank you for setting up the problem, I had lots of fun!
This one reminded me a lot of D&D.Sci 1, in that the main difficulty I encountered was the curse of dimensionality. The space had lots of dimensions so I was data-starved when considering complex hypotheses (performance of individual decks, for instance). Contrast with Voyages of the Grey Swan, where the main difficulty is that broad chunks of the data are explicitly censored.
I also noticed that I'm getting less out of active competitions than I was from the archived posts. I'm so co...
I made some progress (right in the nick of time) by...
Massaging the data into a table of every deck we've seen, and whether the deck won its match or lost it (the code is long and boring, so I'm skipping it here), then building the following machinery to quickly analyze restricted subsets of deck-space.
q = "1 <= dragon <= 6 and 1 <= lotus <= 6"
# Bar chart: each card count's correlation with winning, within the subset
display(decks.query(q).corr()["win"].drop("win").sort_values(ascending=False).plot.bar())
# Winrate, total wins, and sample size for the subset
decks.query(q)["win"].agg(["mean", "sum", "count"])
q is used to filter us down to decks that obey the constraints.
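The massaging step I skipped might look roughly like this: a minimal sketch, assuming the raw data has one row per match with per-side card counts and a winner column (all column names here are my guesses, not the actual dataset's).

```python
import pandas as pd

# Hypothetical raw layout: one row per match, card counts suffixed by
# side ("_a"/"_b"), plus a column naming the winning side.
matches = pd.DataFrame({
    "dragon_a": [2, 0], "lotus_a": [1, 3],
    "dragon_b": [0, 4], "lotus_b": [2, 1],
    "winner": ["a", "b"],
})

def decks_from_matches(matches, cards=("dragon", "lotus")):
    """Flatten matches into one row per deck, with a 0/1 win column."""
    rows = []
    for _, m in matches.iterrows():
        for side in ("a", "b"):
            deck = {c: m[f"{c}_{side}"] for c in cards}
            deck["win"] = int(m["winner"] == side)
            rows.append(deck)
    return pd.DataFrame(rows)

decks = decks_from_matches(matches)  # two decks per match
```

This is just the shape of the transformation; the real code presumably handles all twelve card types and the full match table.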
My counterpoints, in broad order of importance:
If good people were liars, that would render the words of good people meaningless as information-theoretic signals, and destroy the ability for good people to coordinate with others or among themselves.
My mental Harry is making a noise. It goes something like Pfwah! Interrogating him a bit more, he seems to think that this argument is a gross mischaracterization of the claims of information theory. If you mostly tell the truth, and people can tell this is the case, then your words convey information in the information-theoretic sense.
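Harry's point can be made quantitative: model a speaker who lies with probability p as a binary symmetric channel. Unless p is exactly 1/2, their words still carry nonzero mutual information. A quick sketch (assuming uniformly distributed binary statements, which is my simplification):

```python
from math import log2

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def info_per_statement(p):
    """Mutual information (bits) across a binary symmetric channel
    with lie rate p and uniform inputs: I = 1 - H(p)."""
    return 1 - h(p)

print(info_per_statement(0.1))  # ~0.531 bits: mostly-honest words inform
print(info_per_statement(0.5))  # 0.0 bits: coin-flip lying destroys signal
```

So "mostly tells the truth" is enough for information-theoretic signaling; only lying at exactly the coin-flip rate zeroes out the channel.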
EDIT: Now I'm think...
Followed up on this idea and noticed that
A table of winrate as function of number of "evil" cards and "item" cards shows that item cards only benefit evil decks. I considered dragon, emperor, hooligan, minotaur, and pirate to be evil.
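The table described above can be built with a pivot. A minimal sketch, assuming a `decks` DataFrame with one column per card type and a 0/1 `win` column (the item card names here are hypothetical placeholders, not the real card list):

```python
import pandas as pd

EVIL = ["dragon", "emperor", "hooligan", "minotaur", "pirate"]
ITEMS = ["lotus", "sword"]  # hypothetical item card names

def winrate_table(decks):
    """Winrate as a function of evil-card count and item-card count."""
    d = decks.assign(evil=decks[EVIL].sum(axis=1),
                     items=decks[ITEMS].sum(axis=1))
    return d.pivot_table(index="evil", columns="items",
                         values="win", aggfunc="mean")

# Tiny demo: one all-cards deck that won, one empty deck that lost.
demo = pd.DataFrame({c: [1, 0] for c in EVIL + ITEMS} | {"win": [1, 0]})
print(winrate_table(demo))
```

Cells with few decks will be noisy, so it may be worth adding an `aggfunc="count"` companion table before trusting any single winrate.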
Sorry in advance for an entirely too serious comment on a lighthearted post; it made me have thoughts I thought worth sharing. The whole "Karma convertibility" system is funny, but the irony feels slightly off. Society (vague term alert!) does in fact reward popular content with money, and Goodhart's law is not "monetizing an economy instantly crashes it". My objections to Karma convertibility are:
Just the obvious contrarian poke:
Are the decks equally likely? We observe that 412050 decks appear just once, 104483 decks appear twice, etc. Is this distribution compatible with random draws?
There are 342396 rows, with 2 decks each. Solving for the number of valid decks one could make, gives me (straightforward application of counting particle arrangements, imagine you have "coins" to place in "card-type boxes").
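The stars-and-bars count is consistent with 12-card decks drawn (with repetition) from 12 card types; the exact parameters are my guess, but the arithmetic lands on the figure used below:

```python
from math import comb

# Multisets of size k from n types: C(n + k - 1, k).
n_types, deck_size = 12, 12  # assumed game parameters
valid_decks = comb(n_types + deck_size - 1, deck_size)
print(valid_decks)  # 1352078
```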
Then I just simulated and eyeballed. If I pick 2*342396 random numbers from 1 to 1352078, how many numbers appear just once?
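The eyeball check can be sketched in a few lines, assuming uniform deck draws:

```python
import random
from collections import Counter

# Draw 2 * 342396 deck IDs uniformly from the 1352078 valid decks and
# tally how many distinct IDs appear exactly once, twice, etc.
random.seed(0)
draws = [random.randint(1, 1352078) for _ in range(2 * 342396)]
multiplicity = Counter(Counter(draws).values())
print(multiplicity[1], multiplicity[2])  # roughly 412k singletons, 104k pairs
```

Those simulated counts land close to the observed 412050 singletons and 104483 pairs, which is what makes the uniform-draw hypothesis look plausible.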
Hm. I expected to do terribly on this problem since I hardly exhausted my avenues of research (I didn't even clean up the infohazardous object, despite knowing about it from others' comments). I ended up doing the worst out of everybody, which tracks, but the results are still clustered rather close.
I enjoyed the problem a lot, and I'm very grateful to aphyer for pinging me when he made the problem available. Tragically, I was sick at the time. :P
Brief notes:
I came in pretty late, so I don't have much to share.
I split my analysis into expected-profit-if-obtained and chance-of-obtaining-per-team. It probably isn't true, but assuming that team selection does not directly affect profit simplifies things a lot.
Maybe it is. Feynman's abacus story suggests that he (and his colleagues) were familiar with lots of specific numbers, and that this matters somehow. Perhaps I should pick up the habit. Or perhaps that's backwards, and there's some particularly useful skill tree that, as a side effect, results in learning to recognize lots of numbers. Either way, just being aware that this is common among the mathematically inclined seems valuable.
I see. The spikiness is a tipoff that the numbers are being generated by some simple underlying process. I'm still not clear about why primes, though.
I'm guessing the idea is looking out for multiplicative processes, like looking out for the hump-tail shape of the distribution? Multiplying numbers together is an addition on their multiplicities-of-factors representation, so nd6 can never generate a number with a prime factor of 7 or higher. But I'm not explicitly hearing that as the rationale, so it feels like "primes are bound to show up, just keep an eye out for them".
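The nd6 observation is easy to check directly: every die face from 1 to 6 factors into the primes 2, 3, and 5, so no product of rolls can contain a prime factor of 7 or higher. A quick sketch:

```python
import random
from math import prod

def strip_factors(n, primes=(2, 3, 5)):
    """Divide out all factors of the given primes; returns what's left."""
    for p in primes:
        while n % p == 0:
            n //= p
    return n

random.seed(1)
rolls = [random.randint(1, 6) for _ in range(10)]
# The product of d6 rolls reduces to 1 once 2s, 3s, and 5s are removed.
assert strip_factors(prod(rolls)) == 1
```

So a 7, 11, or 13 showing up as a factor immediately rules out "product of d6s" as the generating process, which is one concrete reason to keep an eye out for primes.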
Certainly for crux-hunting, you need two people who are fundamentally collaborating.
It has been pointed out to me that therapy is analogous to depositions in a way relevant to your argument: in therapy both patient and therapist are there with the stated purpose of resolving emotional tensions in the patient, but the patient can prove unhelpful or actively oppose the therapist's probes.
I think this is an example of an interaction that is collaborative in principle, but where techniques designed for adversarial interactions may do good.
"I don't know" can be accurate. I think the advice is intended against people playing dumb, like Bill Clinton's "depends on what the meaning of the word 'is' is" or this witness denying knowledge of what a photocopier is. I know I've pulled this bullshit on myself at least once.
Using strategies that still work when some people act adversarially toward you and try to deceive you is in line with being rational.
I think this gets close to the insight that motivated my post: a part of ourselves often tries to curl into a ball and deny reality to avoid emotional stress, and interacting with that part of yourself is somewhat adversarial.
This comment did not deserve the downvotes; I agree with asking for disclosure.
It does deserve criticism for tone. "Alarmist and uninformed" and "AGI death cult" are distractingly offensive.
The same argument for disclosure could have been made with "given that LW's audience has outsized expectations of AI performance" and "it costs little, and could avoid an embarrassing misunderstanding".
To expand on 5:
I may be explaining Scrum for a job interview, and completely forget that the sprint review is a thing. Ask me about the sprint review, however, and I can make a cogent case for (or against) the necessity of the dev team being involved (customer interactions are the purview of the product owner! agile methodologies emphasize cutting red tape! or something along those lines).
I use notes as reminders/pointers rather than longform descriptions (adopted from "The Bullet Journal Method", ch 2 "Events"). This helps with three things:
Some time ago I noticed this trend among people I respect on Twitter (motivating examples). It seems to me that there is a consensus view that openness has a damaging effect on discourse.
This view does not seem to stem from the problem addressed by "Well-Kept Gardens Die By Pacifism" and "Evaporative Cooling of Group Beliefs" (the gradual decline of a community due to environmental exposure), but rather from the problem that you perceive: the reputational hazard of public fora.
My current stance on public discourse is that it serves as a discovery mechanism:...
It is a direct response to a quotation from the article, so not really.
I guess I want to be "a normal [...] man wearing a female body like a suit of clothing."
Is that weird? Is that wrong?
Okay, yes, it's obviously weird and wrong, but should I care more about not being weird and wrong, than I do about my deepest most heartfelt desire that I've thought about every day for the last nineteen years?
I know this misses most of the point of the article, but I also believe it's worth pointing out: I don't think a male wanting a female body form is any weirder or wronger than a male wanting to be two inches taller, buff, and to have 20/20 eyesight.
PS: I did try reducing "weird and wrong" to their components. Result of the exercise: I find the OP uncontroversially "statistically rare" or "heterodox", but neither "viscerally repulsive" nor "morally reprehensible". I can see the value of explicitly reducing complex concepts in the general case, but I'm not sure it was worthwhile for this instance.
Currently grappling with this problem (compsci undergrad). I'm pulling ideas from Allen's Getting Things Done, Carroll's The Bullet Journal Method, and Ahrens' How to Take Smart Notes.
It is an excellent game to really get the concepts of priors and subjectively objective probabilities. Try a round and ask "what is the probability that the next card on the deck is a 5?", interesting discussion ensues.
Early COVID response on LW was a generalized "this is a big deal." I can't find the post that originally caught my eye, but I remember hitting the supermarkets in Buenos Aires, stocking up on masks and hand sanitizer, and two weeks later seeing the city freak the hell out. Jacob's "Seeing the Smoke" was a strong early signal, and Zvi's updates often considered explicit numbers.