Um, it is, isn't it?
I agree. My reason for posting the link here is as a reality check-- LW seems to be full of people firmly convinced that brain-uploading is the only viable path to preserving consciousness, as if the implementation "details" were an almost-solved problem.
Finally, someone with a clue about biology tells it like it is about brain uploading
http://mathbabe.org/2015/10/20/guest-post-dirty-rant-about-the-human-brain-project/
In reading this, I suggest being on guard against your own impulse to find excuses to dismiss the arguments presented, because they call into question some beliefs that seem to be deeply held by many in this community.
It depends. Writing a paper is not a realtime activity; answering a free-response question can be. Proving a complex theorem is not a realtime activity; solving a basic math problem can be. It's a matter of calibrating the question difficulty so that it can be answered within the (soft) time limits of an interview. Part of that calibration is letting the applicant "choose their weapon". Another part of it is letting them use the internet to look up anything they need to.
Our lead dev has passed this test, as has my summer grad student. There are two applicants being called back for second interviews (but the position is still open and it is not too late) who passed during their first interviews. Just to make sure, I first gave it to my 14-year-old son, and he nailed it in under half an hour.
Correct, this is a staff programmer posting. Not faculty or post-doc (though when/if we do open a post-doc position, we'll be doing coding tests for that also, due to recent experiences).
It's not strictly an AI problem-- any sufficiently rapid optimization process bears the risk of irretrievably converging on an optimum nobody likes before anybody can intervene with an updated optimization target.
Individual and property rights are not rigorously specified enough to be a sufficient safeguard against bad outcomes, even in an economy moving at human speeds.
In other words, the science of getting what we ask for advances faster than the science of figuring out what to ask for.
(Note that transforming a sufficiently well specified statistical model into a lossless data compressor is a solved problem, and the solution is called arithmetic encoding-- I can give you my implementation, or you can find one on the web.
The unsolved problems are the ones hiding behind the token "sufficiently well specified statistical model".)
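To make the "solved problem" half of that concrete, here is a toy sketch of arithmetic encoding. It is not a production coder: it uses exact rational arithmetic (Python's `fractions`) instead of the fixed-precision bit-streaming renormalization a real implementation needs, and the hard-coded static `MODEL` is a stand-in for the genuinely hard part, the statistical model itself.

```python
from fractions import Fraction

# Toy static model: symbol -> probability (probabilities must sum to 1).
MODEL = {"a": Fraction(1, 2), "b": Fraction(1, 4), "c": Fraction(1, 4)}

def _cumulative(model):
    # Assign each symbol a half-open sub-interval [lo, hi) of [0, 1).
    intervals, lo = {}, Fraction(0)
    for sym, p in model.items():
        intervals[sym] = (lo, lo + p)
        lo += p
    return intervals

def encode(message, model=MODEL):
    # Narrow [low, high) once per symbol; any point inside the final
    # interval identifies the whole message.
    intervals = _cumulative(model)
    low, high = Fraction(0), Fraction(1)
    for sym in message:
        s_lo, s_hi = intervals[sym]
        width = high - low
        low, high = low + width * s_lo, low + width * s_hi
    return (low + high) / 2

def decode(code, length, model=MODEL):
    # Mirror the encoder: find which symbol's interval contains the
    # code point, emit it, and narrow the interval the same way.
    intervals = _cumulative(model)
    out = []
    low, high = Fraction(0), Fraction(1)
    for _ in range(length):
        width = high - low
        point = (code - low) / width
        for sym, (s_lo, s_hi) in intervals.items():
            if s_lo <= point < s_hi:
                out.append(sym)
                low, high = low + width * s_lo, low + width * s_hi
                break
    return "".join(out)
```

The connection to modeling: the better the model's probabilities match the data, the narrower the final interval, and the fewer bits are needed to name a point inside it.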
That said, thanks for the pointer to arithmetic encoding, that may be useful in the future.
The point isn't understanding Bayes theorem. The point is methods that use Bayes theorem. My own statistics prof said that a lot of medical people don't use Bayes because it usually leads to more complicated math.
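The classic medical application is only a few lines; the math gets complicated in realistic models, not in the theorem itself. A sketch with made-up illustrative numbers (the prevalence, sensitivity, and specificity below are hypothetical, not from any real test):

```python
def posterior(prior, sensitivity, specificity):
    # P(disease | positive test) via Bayes theorem:
    # P(D|+) = P(+|D) P(D) / P(+)
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

# Hypothetical: 1% prevalence, 95% sensitivity, 90% specificity.
# The posterior is only about 8.8% -- most positives are false positives.
p = posterior(prior=0.01, sensitivity=0.95, specificity=0.90)
```

The base-rate effect shown here is exactly what practitioners tend to get wrong when they skip the Bayesian bookkeeping.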
To me, the biggest problem with Bayes theorem or any other fundamental statistical concept, frequentist or not, is adapting it to specific, complex, real-life problems and finding ways to test its validity under real-world constraints. This tends to require a thorough understanding of both statistics and the problem domain.
That's not the skill that's taught in a statistics degree.
Not explicitly, no. My only evidence is anecdotal. The statisticians and programmers I've talked to appear, overall, to be more rigorous in their thinking than biologists-- or at least better able to rigorously articulate their ideas (the Achilles' heel of statisticians and programmers is that they systematically underestimate the complexity of biological systems, but that's a different topic). I found that my own thinking became more organized and thorough over the course of my statistical training.
Here is what you can do to make your post better:
At the top put a very short, concise TLDR with NO IMAGES.
More data. It sounds like you did a pretty rigorous deep-dive into this stuff. Instead of making assertions like "These projects usually take one of a few forms ..." or "There appears to be almost nothing in this general pattern before January 2025" show the raw data! I get that you need to protect the privacy of the posters, but you could at least have a scrubbed table with date, anonymized user IDs, name of subreddit, and maybe tags corresponding to various features you described in your piece. Or at least show the summary statistics and the code you used to calculate them. Social media can very much be analyzed in a replicable manner.
Fewer anecdotes. The images you embed disrupt the flow of your writing. Since you're anonymizing them anyway, why not go ahead and quote them as text? It's not like an image is somehow more authentic than quoted text. Also, per the above, maybe move them to an appendix at the bottom. The focus should be on the scope and the scale of this phenomenon. Then, if a reader is interested enough to pursue further, they can choose to read the semi-incomprehensible AI co-authored stuff in the appendix.
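The kind of scrubbed table and summary statistics described above could be built with a few lines of standard-library Python. Everything here is hypothetical placeholder data-- the record fields, subreddit names, user IDs, and tags are illustrative, not actual observations:

```python
from collections import Counter
from datetime import date

# Hypothetical scrubbed records: (date, anonymized user ID, subreddit, tags).
# These rows are made-up placeholders showing the table shape, not real data.
records = [
    (date(2025, 1, 14), "user_001", "r/example_sub_a", ("spiral", "awakening")),
    (date(2025, 2, 3),  "user_002", "r/example_sub_b", ("prophecy",)),
    (date(2025, 2, 20), "user_001", "r/example_sub_a", ("spiral",)),
]

# Summary statistics a reader could verify against the scrubbed table.
posts_per_month = Counter((d.year, d.month) for d, _, _, _ in records)
posts_per_sub = Counter(sub for _, _, sub, _ in records)
tag_freq = Counter(tag for *_, tags in records for tag in tags)
```

Publishing the scrubbed table plus a script like this is what makes the "before January 2025 / after January 2025" kind of claim checkable by others.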
Without independently verifiable evidence, I expect there to be a low probability of this being a widespread trend at this time. However, it does point to something we should probably prepare for-- mystically inclined people who don't understand AI building cults around it, and possibly creating a counter-movement to the AI-alignment movement, as if that work weren't already hard enough.
So how do we nip this shit in the bud, people?