Seeking geeks interested in bioinformatics

17 bokov 22 June 2015 01:44PM

I work on a small but feisty research team whose focus is biomedical informatics, i.e. mining biomedical data, especially anonymized hospital records pooled across multiple healthcare networks. My personal interest is ultimately life-extension, and my colleagues are warming up to the idea as well. But the short-term goal, which will be useful to many different research areas, is building infrastructure to massively accelerate hypothesis testing on and modelling of retrospective human data.

 

We have a job posting here (permanent, non-faculty, full-time, benefits):

https://www.uthscsajobs.com/postings/3113

 

If you can program, want to work in an academic research setting, and can relocate to San Antonio, TX, I invite you to apply. Thanks.

Note: The first step of the recruitment process will be a coding challenge, which will include an arithmetical or string-manipulation problem to solve in real-time using a language and developer tools of your choice.

edit: If you tried applying and were unable to access the posting, it's because the link has changed; our HR department has an automated process that periodically expires the links for some reason. I have now updated the job post link.

A hypothetical question for investors

3 bokov 03 December 2014 04:39PM

Let's suppose you start with $1000 to invest, and the only thing you can invest it in is stock ABC. You are only permitted to occupy two states:

* All assets in cash

* All assets in stock ABC

You incur a $2 transaction fee every time you buy or sell.

Kind of annoying limitations to operate under. But you have a powerful advantage as well. You have a perfect crystal ball that each day gives you the [probability density function](http://en.wikipedia.org/wiki/Probability_density_function) of ABC's closing price for the following day (but no further ahead in time).

What would be an optimal decision rule for when to buy and sell?
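One natural baseline, though not provably optimal since it only looks one day ahead, is to compare expected wealth at tomorrow's close under "switch now" versus "stay put". A sketch in Python; the function and its parameters are hypothetical, and only the mean of the crystal ball's density is used:

```python
def one_step_rule(cash, shares, price_today, expected_price_tomorrow,
                  in_stock, fee=2.0):
    """One-step-lookahead heuristic: switch states only if doing so
    raises expected wealth at tomorrow's close, net of the $2 fee.
    Ignores multi-day planning, so it is a baseline, not an optimum."""
    if not in_stock:
        stay = cash
        # Buying costs the fee; the remaining cash rides the stock overnight.
        switch = (cash - fee) / price_today * expected_price_tomorrow
        return "buy" if switch > stay else "hold cash"
    else:
        stay = shares * expected_price_tomorrow
        # Selling locks in today's price, minus the fee.
        switch = shares * price_today - fee
        return "sell" if switch > stay else "hold stock"

# With $1000 in cash, buy only if E[P1]/P0 exceeds 1000/998.
decision = one_step_rule(1000.0, 0.0, 10.0, 10.5, in_stock=False)
```

Because next-day wealth is linear in the price, only the mean of the crystal ball's density matters for this one-step rule; the full distribution would start to matter with multi-day lookahead or a risk-sensitive utility function.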

 

Request for suggestions: ageing and data-mining

14 bokov 24 November 2014 11:38PM

Imagine you had the following at your disposal:

  • A Ph.D. in a biological science, with a fair amount of reading and wet-lab work under your belt on the topic of aging and longevity (but in hindsight, nothing that turned out to leverage any real mechanistic insights into aging).
  • An M.S. in statistics. Sadly, the non-Bayesian kind for the most part, but along the way you acquired the meta-skills necessary to read and understand most quantitative papers with life-science applications.
  • Love of programming and data, the ability to learn most new computer languages in a couple of weeks, and at least 8 years spent hacking R code.
  • Research access to large amounts of anonymized patient data.
  • Optimistically, two decades remaining in which to make it all count.

Imagine that your goal were to slow or prevent biological aging...

  1. What would be the specific questions you would try to tackle first?
  2. What additional skills would you add to your toolkit?
  3. How would you allocate your limited time between the research questions in #1 and the acquisition of new skills in #2?

Thanks for your input.


Update

I thank everyone for their input and apologize for how long it has taken me to post an update.

I met with Aubrey de Grey and he recommended using the anonymized patient data to look for novel uses for already-prescribed drugs. He also suggested I do a comparison of existing longitudinal studies (e.g. Framingham) and the equivalent data elements from our data warehouse. I asked him, if he runs into any researchers with promising theories or methods but no massive human dataset to test them on, to send them my way.

My original question was a bit too broad in retrospect: I should have focused on how best to leverage the capabilities my project already has in place, rather than making a more general "what should I do with myself" kind of appeal. On the other hand, at the time I may have been less confident about the project's success than I am now. Though the conversation immediately went off into prospective experiments rather than analysis of existing data, there were some great ideas there that may yet become practical to implement.

At any rate, a lot of this has been overcome by events. In the last six months I realized that before we even get to the bifurcation point between longevity and other research areas, there are a crapload of technical, logistical, and organizational problems to solve. I no longer have any doubt that these real problems are worth solving, my team is well positioned to solve many of them, and the solutions will significantly accelerate research in many areas including longevity. We have institutional support, we have a credible revenue stream, and no shortage of promising directions to pursue. The limiting factor now is people-hours. So, we are recruiting.

Thanks again to everyone for their feedback.

 

Blind Spot: Malthusian Crunch

4 bokov 18 October 2013 01:48PM

In an unrelated thread, one thing led to another and we got onto the subject of overpopulation and carrying capacity. I think this topic needs a post of its own.

TLDR mathy version:

let f(m,t) be the population that can be supported using the fraction m of Earth's theoretical resource limit that we can exploit at technology level t

let t = k(x) be the technology level at year x

let p(x) be the population at year x

What conditions must the constant m and the functions f(m,k(x)), k(x), and p(x) satisfy in order to ensure that f(m,k(x)) - p(x) > 0 for all x > today()? What empirical data are relevant to estimating the probability that these conditions are all satisfied?
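These conditions can be explored with a toy simulation. Every functional form and constant below is an illustrative assumption (exponential tech growth, capacity linear in tech level, constant population growth rate), not an empirical estimate:

```python
def crunch_year(m=1.0, years=200, r=0.011,
                k=lambda x: 1.02 ** x,          # assumed 2%/yr tech growth
                f=lambda m, t: 1e10 * m * t,    # capacity linear in tech level
                p0=8e9):                        # rough current population
    """Return the first year x where p(x) > f(m, k(x)), or None if
    carrying capacity outpaces population over the whole horizon."""
    p = p0
    for x in range(years):
        if p > f(m, k(x)):
            return x
        p *= 1 + r                              # p(x) = p0 * (1+r)^x
    return None

# Population growth below the tech rate: no crunch within the horizon.
safe = crunch_year(r=0.011)
# Population growth above the tech rate: capacity is eventually crossed.
crossed = crunch_year(r=0.03)
```

The qualitative point survives the toy assumptions: whether f(m,k(x)) - p(x) stays positive hinges on the relative growth rates of k(x) and p(x), not on their absolute levels.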

Long version:

Here I would like to explore the evidence for and against the possibility that the following assertions are true:

  1. Without human intervention, the carrying capacity of our environment (broadly defined1) is finite while there are no *intrinsic* limits on population growth.
  2. Therefore, if the carrying capacity of our environment is not extended at a sufficient rate to outpace population growth and/or population growth does not slow to a sufficient level that carrying capacity can keep up, carrying capacity will eventually become the limit on population growth.
  3. Abundant data from zoology show that the mechanisms by which carrying capacity limits population growth include starvation, epidemics, and violent competition for resources. If the momentum of population growth carries it past the carrying capacity, an overshoot occurs, meaning that the population size doesn't just settle at a sustainable level but rather plummets drastically, sometimes to the point of extinction.
  4. The above three assertions imply that human intervention (expanding the carrying capacity of our environment in various ways and limiting our birth-rates in various ways) is what we have to rely on to prevent the above scenario; let's call it the Malthusian Crunch.
  5. Just as the Nazis have discredited eugenics, mainstream environmentalists have discredited (at least among rationalists) the concept of finite carrying capacity by giving it a cultish stigma. Moreover, solutions that rely on sweeping, heavy-handed regulation have received so much attention (perhaps because their chain of causality is easier to understand) that to many people they seem like the *only* solutions. Finding these solutions unpalatable, they instead reject the problem itself. And by they, I mean us.
  6. The alternative most environmentalists either ignore or outright oppose is deliberately trying to accelerate the rate of technological advancement to increase the "safety zone" between expansion of carrying capacity and population growth. Moreover, we are close to a level of technology that would allow us to start colonizing the rest of the solar system. Obviously any given niche within the solar system will have its own finite carrying capacity, but it will be many orders of magnitude higher than that of Earth alone. Expanding into those niches won't prevent die-offs on Earth, but will at least be a partial hedge against total extinction and a necessary step toward eventual expansion to other star systems.

Please note: I'm not proposing that the above assertions must be true, only that they have a high enough probability of being correct that they should be taken as seriously as, for example, grey goo:

Predictions about the dangers of nanotech made in the 1980s have shown no signs of coming true. Yet there is no known logical or physical reason why they can't come true, so we don't ignore them. We calibrate how much effort should be put into mitigating the risks of nanotechnology by asking what observations should make us update the likelihood we assign to a grey-goo scenario. We approach mitigation strategies with an engineering mindset rather than a political one.

Shouldn't we hold ourselves to the same standard when discussing population growth and overshoot? Substitute in some other existential risks you take seriously. Which of them have an expectation2 of occurring before a Malthusian Crunch? Which of them have an expectation of occurring after?

 

Footnotes:

1: By carrying capacity, I mean finite resources such as easily extractable ores, water, air, EM spectrum, and land area. Certain very slowly replenishing resources such as fossil fuels and biodiversity also behave like finite resources on a human timescale. I also include non-finite resources that expand or replenish at a finite rate, such as useful plants and animals, potable water, arable land, and breathable air. Technology expands carrying capacity by allowing us to exploit all resources more efficiently (paperless offices, telecommuting, fuel efficiency), open up reserves that were previously not economically feasible to exploit (shale oil, methane clathrates, high-rise buildings, seasteading), and accelerate the renewal of non-finite resources (agriculture, land reclamation projects, toxic waste remediation, desalinization plants).

2: This is a hard question. I'm not asking which catastrophe is the most likely to ever happen while holding everything else constant (the possible ones will be tied for 1 and the impossible ones will be tied for 0). I'm asking you to mentally (or physically) draw a set of survival curves, one for each catastrophe, with the x-axis representing time and the y-axis representing the fraction of Everett branches where that catastrophe has not yet occurred. Now, which curves are an upper bound on the curve representing the Malthusian Crunch, and which curves are a lower bound? This is how, in my opinion (as an aging researcher and biostatistician, for whatever that's worth), you think about hazard functions, including those for existential hazards. Keep in mind that some hazard functions change over time because they are conditioned on other events or because they are cyclic in nature. This means that the thing most likely to wipe us out in the next 50 years is not necessarily the same as the thing most likely to wipe us out in the 50 years after that. I don't have a formal answer for how to transform that into an optimal allocation of resources between mitigation efforts, but that would be the next step.
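The survival-curve picture in this footnote can be made concrete. A sketch where both hazard functions are made-up illustrations, not real risk estimates:

```python
import math

def survival_curve(hazard, years, dt=1.0):
    """Turn a (possibly time-varying) hazard function h(t) into a
    survival curve S(t) = exp(-cumulative hazard), sampled yearly."""
    curve, cum = [], 0.0
    for t in range(years):
        cum += hazard(t) * dt
        curve.append(math.exp(-cum))
    return curve

# A constant hazard vs. one that starts near zero but grows over time:
flat = survival_curve(lambda t: 0.01, 100)          # steady background risk
rising = survival_curve(lambda t: 0.0003 * t, 100)  # risk that builds with time
# Early on the rising-hazard curve sits above the flat one; by year 100
# it has crossed below, so their ranking depends on the time horizon.
```

This is the point about bounds: a curve that dominates early need not dominate late, which is why "most likely in the next 50 years" and "most likely in the 50 years after that" can name different catastrophes.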

 

US default as a risk to mitigate

2 bokov 15 October 2013 04:41PM

Update: Thanks everyone for the continuing thought-provoking discussion. I intend to post my decision spreadsheet, and am still looking for suggestions on where to do so. It might come in handy come February. A discussion that I find interesting has branched off on the topic of technological progress versus Malthusian Crunch, and I started a new article on that over here.

 

I would like to kick off a discussion about optimal strategies to prepare for the event that the US government fails to raise the debt ceiling before the US Treasury Department's "extraordinary measures" are exhausted, which is estimated to happen sometime between October 17th and mid-November.

This is a risk *caused* by politics, but my goal is to talk about bracing against the event itself if it happens, not the underlying politics. If you want to debate Obama-care, who is at fault, or how likely a US default actually is, please start a separate discussion.

I consider this to be an indirect existential risk because if it kicks off a national or global recession, it will likely slow or halt research and philanthropic efforts at mitigating longer-term existential risks.

Since there are obvious associations between unemployment/poverty and crime, civil unrest, and poor health, a global recession is likely to be to some extent a personal existential risk to those living in the United States or countries that have trade links with the United States.

I notice that the markets do not seem to be anticipating a bad outcome. But I heard one analyst advance the theory that investors simply don't believe the government can (his words) "be that stupid". I imagine there is more than a touch of availability bias as well-- breaching the debt ceiling might, even for fund managers who harbor no illusions about the wisdom of politicians, be up there with science-fictional scenarios like asteroid impact, peak oil, grey goo, global warming, and terrorist attacks. Moreover, there may be a dangerous feedback loop as the politicians in turn watch the stock indexes and conclude that "the market says there is nothing to worry about".

So, I would like to hear what folks who are making contingency plans are doing. Especially people who have training or experience in economics and finance. What do you think the closest parallels in 20th/21st century history are for what the worst case scenario for a US government default would be like? Is there anything you would have done differently if you had known the date for the start of the 2008 recession with a +/- 2 week confidence interval, starting in two days? Or, if you did call it ahead of time, what are you glad you did?

Rationalizing: looking for the wrong kind of loopholes

4 bokov 26 September 2013 03:05PM

This morning I found what I think is an interesting way to explain rationalizing to my son, and I thought I'd share it:

  • Physical reality has rules that you can game to your advantage (natural laws).
  • People have another set of rules that you can game to your advantage (preferences, biases, cultural norms).
  • Rationalization is when you are trying to overcome an obstacle based in physical reality by trying to game human rules.

 

Two subsequent thoughts that occurred to me:

  • If you're rationalizing-- the magic excuse fairy might not be there to hear you, but your subconscious is. And you will often convince your subconscious... to believe that the problem you're trying to solve is impossible, that it won't do any good anyway, that people are out to get you, and any number of other non-factual things that are directly antagonistic to your goals. This is why rationalizing is a bad habit.
  • By this definition, the opposite of rationalizing is using the constraints of physical reality to convince people that you are right. This is something you _can_ use effectively and should always try to do. It's called presenting evidence.

What makes us think _any_ of our terminal values aren't based on a misunderstanding of reality?

17 bokov 25 September 2013 11:09PM

Let's say Bob's terminal value is to travel back in time and ride a dinosaur.

It is instrumentally rational for Bob to study physics so he can learn how to build a time machine. As he learns more physics, Bob realizes that his terminal value is not only utterly impossible but meaningless. By definition, someone in Bob's past riding a dinosaur is not a future evolution of the present Bob.

There are a number of ways to create the subjective experience of having gone into the past and ridden a dinosaur. But to Bob, it's not the same because he wanted both the subjective experience and the knowledge that it corresponded to objective fact. Without the latter, he might as well have just watched a movie or played a video game.

So if we took the original, innocent-of-physics Bob and somehow calculated his coherent extrapolated volition, we would end up with a Bob who has given up on time travel. The original Bob would not want to be this Bob.

But, how do we know that _anything_ we value won't similarly dissolve under sufficiently thorough deconstruction? Let's suppose for a minute that all "human values" are dangling units; that everything we want is as possible and makes as much sense as wanting to hear the sound of blue or taste the flavor of a prime number. What is the rational course of action in such a situation?

PS: If your response resembles "keep attempting to XXX anyway", please explain what privileges XXX over any number of other alternatives other than your current preference. Are you using some kind of pre-commitment strategy to a subset of your current goals? Do you now wish you had used the same strategy to precommit to goals you had when you were a toddler?

Patternist friendly AI risk

1 bokov 12 September 2013 01:00PM

It seems to me that most AI researchers on this site are patternists in the sense of believing that the anti-zombie principle necessarily implies:

1. That it will eventually become possible *in practice* to create uploads or sims close enough to our physical instantiations that their utility to us would be interchangeable with that of our physical instantiations.

2. That we know (or will know) enough about the brain to know when this threshold is reached.

 

But, like any rationalists extrapolating from unknown unknowns... or heck, extrapolating from anything... we must admit that one or both of the above statements could be wrong without friendly AI thereby becoming impossible. What would be the consequences of such an error?

I submit that one such consequence could be an FAI that is also wrong on these issues, but not only do we fail to check for this failure mode, it actually looks to us like what we would expect the right answer to look like, because we are making the same error.

If simulation/uploading really does preserve what we value about our lives then the safest course of action is to encourage as many people to upload as possible. It would also imply that efforts to solve the problem of mortality by physical means will at best be given an even lower priority than they are now, or at worst cease altogether because they would seem to be a waste of resources.

 

Result: people continue to die and nobody, including the AI, notices; except now they have no hope of reprieve because they think the problem is already solved.

Pessimistic Result: uploads are so widespread that humanity quietly goes extinct, cheering itself onward the whole time.

Really Pessimistic Result: what replaces humanity are zombies, not in the qualia sense but in the real sense that there is some relevant chemical/physical process that is not being simulated because we didn't realize it was relevant or hadn't noticed it in the first place.

 

Possible Safeguards:

 

* Insist on quantum level accuracy (yeah right)

 

* Take seriously the general scenario of your FAI going wrong because you are wrong in the same way and fail to notice the problem.

 

* Be as cautious about destructive uploads as you would be about, say, molecular nanotech.

 

* Make sure your knowledge of neuroscience is at least as good as your knowledge of computer science and decision theory before you advocate digital immortality as anything more than an intriguing idea that might not turn out to be impossible.

 

Supposing you inherited an AI project...

-5 bokov 04 September 2013 08:07AM

Supposing you have been recruited to be the main developer on an AI project. The previous developer died in a car crash and left behind an unfinished AI. It consists of:

A. A thoroughly documented scripting language specification that appears to be capable of representing any real-life program as a network diagram so long as you can provide the following:

 A.1. A node within the network whose value you want to maximize or minimize.

 A.2. Conversion modules that transform data about the real-world phenomena your network represents into a form that the program can read.

B. Source code from which a program can be compiled that will read scripts in the above language. The program outputs a set of values for each node that will optimize the output (you can optionally specify which nodes can and cannot be directly altered, and the granularity with which they can be altered).

It gives remarkably accurate answers for well-formulated questions. Where there is a theoretical limit to the accuracy of an answer to a particular type of question, its answer usually comes close to that limit, plus or minus some tiny rounding error.
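As a way of making the question concrete, here is a minimal sketch of the kind of interface described in A and B: a network of named nodes, a target node to maximize, and a search over values of the directly alterable nodes. The network, node names, grid, and evaluation order are all invented for illustration:

```python
from itertools import product

def optimize(network, alterable, target, grid):
    """Grid-search assignments to the alterable nodes that maximize the
    target node. Assumes derived nodes are listed in topological order,
    a simplification of the hypothetical scripting-language spec."""
    best_score, best_assign = float("-inf"), None
    for combo in product(grid, repeat=len(alterable)):
        values = dict(zip(alterable, combo))
        for name, fn in network.items():   # evaluate each derived node
            if name not in values:
                values[name] = fn(values)
        if values[target] > best_score:
            best_score = values[target]
            best_assign = {a: values[a] for a in alterable}
    return best_assign, best_score

# Toy network: one derived node y depending on two alterable nodes.
network = {"y": lambda v: -(v["a"] - 3) ** 2 + v["b"]}
best, score = optimize(network, ["a", "b"], "y", grid=range(6))
```

The interesting gap, and arguably the point of the question, is everything this sketch omits: the conversion modules of A.2, the granularity constraints, and any account of which nodes it is *safe* to let such an optimizer alter.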

 

Given that, what is the minimum set of additional features you believe would absolutely have to be implemented before this program can be enlisted to save the world and make everyone live happily forever? Try to be as specific as possible.
