Comments

bokov

I agree. My reason for posting the link here is as a reality check -- LW seems to be full of people firmly convinced that brain uploading is the only viable path to preserving consciousness, as if the implementation "details" were an almost-solved problem.

bokov

Finally, someone with a clue about biology tells it like it is about brain uploading:

http://mathbabe.org/2015/10/20/guest-post-dirty-rant-about-the-human-brain-project/

In reading this, I suggest being on guard against your own impulse to find excuses to dismiss the arguments presented, because they call into question some beliefs that seem to be deeply held by many in this community.

bokov

It depends. Writing a paper is not a real-time activity; answering a free-response question can be. Proving a complex theorem is not a real-time activity; solving a basic math problem can be. It's a matter of calibrating the question difficulty so that it can be answered within the (soft) time limits of an interview. Part of that calibration is letting the applicant "choose their weapon". Another part is letting them use the internet to look up anything they need to.

Our lead dev has passed this test, as has my summer grad student. Two applicants who passed during their first interviews are being called back for second interviews (but the position is still open, and it is not too late). Just to make sure, I first gave it to my 14-year-old son, and he nailed it in under half an hour.

bokov

Correct, this is a staff programmer posting. Not faculty or post-doc (though when/if we do open a post-doc position, we'll be doing coding tests for that also, due to recent experiences).

bokov

Having a track record of contributions to github/bitbucket/sourceforge/rforge would be a very strong qualification. However, few applicants have one. It's a less stringent requirement that they at least show that they can... you know... program.

bokov

It's not strictly an AI problem -- any sufficiently rapid optimization process bears the risk of irretrievably converging on an optimum nobody likes before anybody can intervene with an updated optimization target.

Individual and property rights are not rigorously specified enough to be a sufficient safeguard against bad outcomes, even in an economy moving at human speeds.

In other words, the science of getting what we ask for advances faster than the science of figuring out what to ask for.
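
A toy illustration of that convergence risk (everything below is made up): a greedy hill-climber drives a mis-specified proxy objective to its optimum long before any slower objective-revision loop gets a chance to intervene.

```python
# Toy sketch of fast optimization on a mis-specified target (all numbers
# hypothetical). The optimizer only ever sees `proxy`; by the time anyone
# reviews the objective, it has already converged somewhere nobody likes.
import random

def proxy(x):            # the target we actually specified
    return -(x - 10.0) ** 2

def intended(x):         # the target we would have specified, given time
    return -(x - 3.0) ** 2

x = 0.0
for _ in range(10_000):  # "machine speed": many steps per human review
    candidate = x + random.uniform(-0.1, 0.1)
    if proxy(candidate) > proxy(x):
        x = candidate    # greedy hill-climb on the proxy

print(f"x={x:.2f}  proxy={proxy(x):.1f}  intended value={intended(x):.1f}")
# Typical output: x near 10, proxy near 0, intended value near -49.
```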

bokov

(Note that transforming a sufficiently well specified statistical model into a lossless data compressor is a solved problem, and the solution is called arithmetic encoding - I can give you my implementation, or you can find one on the web.)

The unsolved problems are the ones hiding behind the token "sufficiently well specified statistical model".

That said, thanks for the pointer to arithmetic encoding; that may be useful in the future.
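
For anyone else following along, here is a minimal sketch of the idea (my own toy code, not the implementation offered above): a float-based arithmetic coder where a fixed symbol distribution stands in for the statistical model. Real coders use integer ranges with bit renormalization to sidestep floating-point precision limits.

```python
# Minimal float-based arithmetic coding sketch (toy only: floating-point
# precision limits this to short messages). The dict `probs` plays the
# role of the "sufficiently well specified statistical model".

def cum_ranges(probs):
    """Assign each symbol a half-open slice of [0, 1) sized by its probability."""
    lo, ranges = 0.0, {}
    for sym, p in probs.items():
        ranges[sym] = (lo, lo + p)
        lo += p
    return ranges

def encode(message, probs):
    """Narrow [0, 1) around the message; any number in the final interval codes it."""
    ranges = cum_ranges(probs)
    lo, hi = 0.0, 1.0
    for sym in message:
        s_lo, s_hi = ranges[sym]
        width = hi - lo
        lo, hi = lo + width * s_lo, lo + width * s_hi
    return (lo + hi) / 2

def decode(code, length, probs):
    """Invert encode() given the message length and the same model."""
    ranges = cum_ranges(probs)
    out = []
    for _ in range(length):
        for sym, (s_lo, s_hi) in ranges.items():
            if s_lo <= code < s_hi:
                out.append(sym)
                code = (code - s_lo) / (s_hi - s_lo)  # rescale back to [0, 1)
                break
    return "".join(out)

probs = {"a": 0.5, "b": 0.25, "c": 0.25}  # toy stand-in for the model
msg = "abacab"
assert decode(encode(msg, probs), len(msg), probs) == msg
```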

bokov

The point isn't understanding Bayes' theorem. The point is methods that use Bayes' theorem. My own statistics prof said that a lot of medical people don't use Bayes because it usually leads to more complicated math.
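
To make the kind of method concrete (all numbers below are hypothetical), here is Bayes' theorem applied to a diagnostic test result, where the low base rate dominates the answer:

```python
# Bayes' theorem on a diagnostic test (hypothetical numbers).
prior = 0.01          # P(disease): 1% base rate
sensitivity = 0.99    # P(test positive | disease)
specificity = 0.95    # P(test negative | no disease)

# P(positive) by the law of total probability:
p_pos = sensitivity * prior + (1 - specificity) * (1 - prior)

# P(disease | positive) by Bayes' theorem:
posterior = sensitivity * prior / p_pos
print(f"{posterior:.3f}")  # ~0.167 -- most positives are still false positives
```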

To me, the biggest problem with Bayes' theorem or any other fundamental statistical concept, frequentist or not, is adapting it to specific, complex, real-life problems and finding ways to test its validity under real-world constraints. This tends to require a thorough understanding of both statistics and the problem domain.

That's not the skill that's taught in a statistics degree.

Not explicitly, no. My only evidence is anecdotal. The statisticians and programmers I've talked to appear, on the whole, to be more rigorous in their thinking than biologists, or at least better able to articulate their ideas rigorously (the Achilles' heel of statisticians and programmers is that they systematically underestimate the complexity of biological systems, but that's a different topic). I found that my own thinking became more organized and thorough over the course of my statistical training.

bokov

Also, I'm not sure if this is your intention, but it seems to me that the goal of spending 20 years to slow or prevent aging is a recipe for wasting time. It's such an ambitious goal, and so many people are already working on it, that any one researcher is unlikely to put a measurable dent in it.

In the last five years, the NIH (National Institutes of Health) has never spent more than 2% of its budget on aging research. To a first approximation, the availability of grant support is proportional to the number of academic researchers, or at least to the amount of academic research effort being put into a problem. This is evidence against aging already getting enough attention, especially considering that age is a major risk factor for just about every disease. It's as if we tried to treat AIDS by spending 2% on HIV research and 98% on all the hundreds of opportunistic infections that are the proximal causes of any individual AIDS patient's death. I would think that curing several hundred proximal problems is more ambitious than trying to understand and intervene in a few underlying causes.

I have no illusions of single-handedly curing aging in the next two decades. I will be as satisfied as any other stiff in the cryofacility if I manage to remove one or more major roadblocks to a practical anti-aging intervention, or at least to a well-defined and valid mechanistic model of aging.

bokov

Secondly, you probably shouldn't worry about pursuing a project in which your already-collected data is useless, especially if that data or similar is also available to most other researchers in your field (if not, it would be very useful for you to try to make that data available to others who could do something with it). You're probably more likely to make progress with interesting new data than interesting old data.

This is 'new' data in the sense that it is only now becoming available for research purposes, and if I have my way, it is going to be in a very flexible and analysis-friendly format. It is the core mission of my team to make the data available to researchers (insofar as permitted by law, patients' right to privacy, and contractual obligations to the owners of the data).

If I ran "academia", tool and method development would take at least as much priority as traditional hypothesis-driven research. I think a major take-home message of LW is that hypotheses are a dime a dozen-- what we need are practical ways to rank them and update their rankings on new data. A good tool that lets you crank through thousands of hypotheses is worth a lot more than any individual hypothesis. I have all kinds of fun ideas for tools.
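
As a sketch of what I mean by ranking hypotheses and updating their rankings on new data (a toy setup of my own: each "hypothesis" is a candidate coin bias, but the same loop works for any likelihood function):

```python
# Toy sketch: maintain a ranked posterior over thousands of hypotheses and
# update it as data arrives. Here a "hypothesis" is just a candidate coin
# bias; in practice the likelihood would come from the problem domain.
import numpy as np

hypotheses = np.linspace(0.001, 0.999, 5000)                 # 5000 candidates
posterior = np.full(len(hypotheses), 1.0 / len(hypotheses))  # uniform prior

def update(posterior, observation):
    """One Bayesian update: weight each hypothesis by its likelihood."""
    likelihood = hypotheses if observation else 1.0 - hypotheses
    posterior = posterior * likelihood
    return posterior / posterior.sum()                       # renormalize

for observation in [1, 1, 0, 1, 1, 1, 0, 1, 1, 0]:           # new data arriving
    posterior = update(posterior, observation)

top = np.argsort(posterior)[::-1][:3]                        # current top-ranked
for i in top:
    print(f"bias={hypotheses[i]:.3f}  posterior={posterior[i]:.2e}")
```

The ranking machinery is generic; the domain knowledge lives entirely in the likelihood.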

But for the purposes of this post, I'm assuming that I'm stuck with the academia we have, I have access to a large anonymized clinical dataset, and I want to make the best possible use of it (I'll address your points about aging as a choice of topic in a separate reply).

The academia we're stuck with (at least in the biomedical field) effectively requires faculty to have a research plan describable by "Determine whether FOO is true or false" rather than "Create a FOO that does BAR".

So the no-brainer approach would be for me to take the tool I most want to develop, slap some age-related disease onto it as a motivating use case, and make that my grant. But this optimizes for the wrong thing -- I don't want to find excuses for engaging in fascinating intellectual exercises. I want to find the problems with the greatest potential to advance human longevity, and then bring my assets to bear on those problems even if the work turns out to be uglier and more tedious than my ideal informatics project.

The reason I'm asking for the LW community's perspective on what's on the critical path to human longevity is that I've spent too much time around excuse-driven^H^H^H hypothesis-driven research to put much faith in my own intuitions about what problems need to be solved.
