Comment author: rmoehn 19 July 2016 01:42:05AM 1 point [-]

Thank you!

After graduating, why would you need to be based in Kagoshima?

I need to be based in Kagoshima for pretty strong personal reasons. Sorry for not providing details. If you really need them, I can tell you more via PM.

Ah, you write »after graduating«? Sorry for not providing that detail: research students in Japan are not working on a master's or PhD. They're just hanging around studying or doing research, and hopefully learning something during that time.

Have you taken a look at the content on MIRI's Agent Foundations forum?

Yes, I've read all of the agenda papers and some more.

Have you considered applying to visit AI safety researchers at MIRI or FHI? That would help you to figure out where your interests and theirs overlap, and to consider how you might contribute.

I applied for the MIRI Summer Fellows Programme, which I didn't get into by a small margin, and CFAR's Workshop on AI Safety Strategy, which I also didn't get into. They told me they might put me in the next one. That would definitely help me with my questions, but I thought it's better to start early, so I asked here.

If you're not eligible to visit for some reason, that might imply that you're further from being useful than you thought.

I am at the very beginning of learning ML and AI and therefore kind of far from being useful. I know this. But I'm quite good at maths and computer science and a range of other things, so I thought contributing to AI safety research shouldn't be too far out of reach. It will just take time. (Just as a master's programme would take time, for example.) The hard part is getting hold of money to sustain myself during that time.

I might be useful for things other than research directly, such as support software development, teaching, writing, outreach, or organizing. I haven't done much teaching, outreach or organizing, but I would be interested to try more.

Comment author: RyanCarey 19 July 2016 08:44:49AM 1 point [-]

I don't really know of any AI researchers in our extended network, out of some dozens, who've managed to be taken very seriously without being colocated with other top researchers. So without knowing more, it still seems moderately likely to me that the best plan involves something like earning while practising math, or doing a PhD, with the intent to move in 2-3 years, depending on how long you can't move for.

Otherwise, it seems like you're doing the right things, but until you put out some papers or something, I think I'd sooner direct funding to projects among the FLI grantees. I'd note that most of the credible LW/EA researchers are doing PhDs and postdocs or taking on AI safety research roles in industry, and receive funds through those avenues; it seems to me like those would also be the next steps for you in your career.

If you had a very new idea that you had an extraordinary comparative advantage at exploring, then it's not inconceivable that you could be among the most eligible GCR-reduction researchers for funding, but you'd have to say a lot more.

Comment author: RyanCarey 18 July 2016 06:35:16AM *  4 points [-]

I would endorse what John Maxwell has said but would be interested to hear more details.

After graduating, why would you need to be based in Kagoshima? Most postdocs travel around the world a lot in order to be with the leading experts and x-risk research is no different.

Have you taken a look at the content on MIRI's Agent Foundations forum?

Have you considered running a MIRIx workshop to practice AI safety research?

Have you considered applying to visit AI safety researchers at MIRI or FHI? That would help you to figure out where your interests and theirs overlap, and to consider how you might contribute. If you're not eligible to visit for some reason, that might imply that you're further from being useful than you thought.

Good luck!

Comment author: johnlawrenceaspden 20 May 2016 10:13:01PM *  0 points [-]

That's what I was expecting, but 2.5 isn't suppressed; it's actually quite high compared to the average for healthy people (or at least normal, depending on what you think normal is), and roughly the same as it was at the start of all this. And both the free hormones look low. You'd think adding a fair bit of thyroid to a healthy system would have bumped up the free hormones and maybe lowered TSH to somewhere in the hyperthyroid range.

What's really weird is that I've tripled the dose of NDT since the last time I had blood drawn, and my TSH has gone up slightly in response. I thought I'd be seriously suppressing my own system by now.

It's possible that I've just developed a primary gland failure, but that's weird because there was no sign of it when I first showed severe symptoms.

Comment author: RyanCarey 21 May 2016 08:45:06AM 0 points [-]

Ok so your TSH is normal and your T3/T4 are low in the normal range because you've replaced them with some T1/T2. Every value is in the normal range. Problem?

It makes no sense at all to call it pituitary failure (central hypothyroidism) - that would imply low TSH. You could argue that it's successfully medicated peripheral hypothyroidism if anything, though that's a stretch.

Comment author: johnlawrenceaspden 19 May 2016 09:47:35PM 0 points [-]

At the fourth attempt, my doctor managed to get the local lab to test TSH, T3 and T4 simultaneously. He had to ring them up and ask them in person, apparently. It turns out that I've currently got TSH~2.5, and FT4, FT3 low-in-range. Given that that looks like central hypothyroidism, and that's under the influence of 1 grain/day of desiccated thyroid, we've decided that we have no clue, and I'm carrying on messing around with random thyroid drugs aiming for relief of symptoms (which are all gone, but I keep having to up the dose to keep it so).

Basically Christ knows. If I'm not medically unique, there's something very funny going on.

Comment author: RyanCarey 20 May 2016 02:51:02PM 1 point [-]

Aren't you just taking thyroid hormone analogues (not T3/T4) that are - as expected - suppressing the pituitary production of TSH?

Comment author: ChristianKl 12 May 2016 11:09:17AM 2 points [-]

What kind of shock do you have in mind?

Comment author: RyanCarey 12 May 2016 12:27:12PM 2 points [-]

I don't have a specific one in mind, but nuclear winter, or a catastrophic problem spreading through the atmosphere, or something bioengineered would be conceivable.

Comment author: ChristianKl 10 May 2016 07:22:01PM 4 points [-]

I have a hard time imagining a scenario where an ISS-style space station would allow disaster recovery.

Comment author: RyanCarey 12 May 2016 03:28:45AM *  1 point [-]

Suppose some kind of shock makes Earth briefly uninhabitable, and things don't work out with people who immediately emerge from bunkers and submarines, but 100 people can come down again from space shortly afterwards and recolonise it.

Comment author: Stefan_Schubert 10 May 2016 05:08:53PM 0 points [-]

Sure, I guess my question was whether you'd think that it'd be possible to do this in a way that would resonate with readers. Would they find the estimates of quality, or level of postmodernism, intuitively plausible?

My hunch was that the classification would primarily be based on patterns of word use, but you're right that it would probably be fruitful to look at patterns of citations.

Comment author: RyanCarey 11 May 2016 03:00:17AM 2 points [-]

If you get a well-labelled dataset, I think this is pretty thoroughly within the scope of current machine learning technologies, but that means spending perhaps hundreds of hours labelling papers with a postmodernism score out of 100. If you're trying to single out the postmodernism that you're convinced is total BS, then that's more complex. Doable, but you'd need to make the case to me about why it would be worthwhile, and what exactly your aim would be.
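A minimal sketch of what such a labelled-scoring setup could look like. Everything here is hypothetical: the training pairs, the 0-100 scores, and the naive per-word averaging model (a real system would use a proper learned model over far more data):

```python
from collections import defaultdict

# Hypothetical labelled data: (paper text, hand-assigned postmodernism
# score out of 100). Real data would need the hundreds of hours of
# labelling mentioned above.
TRAIN = [
    ("discourse hegemony signifier deconstruction", 90),
    ("gradient descent convergence proof theorem", 5),
    ("signifier discourse narrative hegemony", 85),
    ("theorem lemma proof bound convergence", 10),
]

def fit_word_scores(pairs):
    """For each word, average the scores of the documents it appears in."""
    totals, counts = defaultdict(float), defaultdict(int)
    for text, score in pairs:
        for word in set(text.split()):
            totals[word] += score
            counts[word] += 1
    return {w: totals[w] / counts[w] for w in totals}

def predict(text, word_scores, default=50.0):
    """Score a new paper by averaging the learned scores of its known words."""
    known = [word_scores[w] for w in text.split() if w in word_scores]
    return sum(known) / len(known) if known else default

weights = fit_word_scores(TRAIN)
print(predict("deconstruction of the signifier", weights))  # → 88.75
print(predict("a convergence theorem", weights))            # → 7.5
```

This is essentially a bag-of-words regression reduced to its simplest form; the point is only that, given labels, turning them into a scorer is routine.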

Comment author: turchin 10 May 2016 10:33:10PM *  4 points [-]

I created a somewhat similar plan for x-risk prevention, which is here: http://immortality-roadmap.com/globriskeng.pdf But in it, raising robustness is only one part of the whole plan, and it does not include many of the ideas from your plan, which appear in other parts of the map.

In my plan (Plan A3 in the map), robustness consists of several steps:

Step One. Improving sustainability of civilization
• Intrinsically safe critical systems
• Growing diversity of human beings and habitats
• Universal methods of catastrophe prevention (resistant structures, strong medicine)
• Building reserves (food stocks, seeds, minerals, energy, machinery, knowledge)
• Widely distributed civil defence, including:
  - temporary shelters
  - air and water cleaning systems
  - radiation meters, gas masks
  - medical kits
  - mass education

Step Two. Useful ideas to limit the scale of a catastrophe
• Limit the impact of a catastrophe by implementing measures to slow its growth and the areas impacted:
  - technical instruments for implementing quarantine
  - improve the capacity for rapid production of vaccines in response to emerging threats
  - grow stockpiles of important vaccines
• Increase preparation time by improving monitoring and early detection technologies:
  - support general research on the magnitude of biosecurity risks and opportunities to reduce them
  - improve and connect disease surveillance systems so that novel threats can be detected and responded to more quickly
• Worldwide x-risk prevention exercises
• The ability to quickly adapt to new risks and envision them in advance

Step Three. High-speed tech development needed to quickly pass the risk window
• Investment in super-technologies (nanotech, biotech, Friendly AI)
• High-speed technical progress helps to overcome the slow process of resource depletion
• Invest more in defensive technologies than in offensive ones

Step Four. Timely achievement of immortality on the highest possible level
• Nanotech-based immortal body
• Diversification of humanity into several successor species capable of living in space
• Mind uploading
• Integration with AI

Comment author: RyanCarey 11 May 2016 02:56:24AM 2 points [-]

A lot of strong suggestions there - I've added subs for example.

Re how to plot a course of action for mitigating these risks, I guess GCRI is doing a lot of the theoretical work on robustness, and they could be augmented by more political lobbying and startup projects?

Comment author: fubarobfusco 11 May 2016 01:38:38AM *  2 points [-]
  • Legislating for individuals to be held more accountable for large-scale catastrophic errors that they may make (including by requiring insurance premiums for any risky activities)

If I blow up the planet, neither my insurance nor your lawsuit is going to help anything. Which is to say, this proposal is just a wealth transfer to insurance companies, since they never have to pay out.

Comment author: RyanCarey 11 May 2016 02:43:52AM 2 points [-]

If you're running a synthetic biology company and have to be insured against major pandemics, you may need more risk-reduction measures to stay profitable, which reduces existential risk, precisely because many pandemics can bring on costs without causing extinction.

Comment author: Stefan_Schubert 10 May 2016 10:26:20AM *  3 points [-]

deleted

Comment author: RyanCarey 10 May 2016 04:21:05PM 0 points [-]

If you had a million labelled postmodern and non-postmodern papers, you could decently identify them.

You could categorise most papers with fewer labels using citation graphs.

You can recommend papers, as you would Amazon books, with a recommender system (using ratings).
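A toy sketch of what a ratings-based recommender could look like. The ratings matrix, reader names, and paper IDs are all made up; this uses simple user-user cosine similarity rather than any particular production technique:

```python
from math import sqrt

# Hypothetical reader ratings: reader -> {paper_id: rating out of 5}.
RATINGS = {
    "alice": {"p1": 5, "p2": 4, "p3": 1},
    "bob":   {"p1": 4, "p2": 5},
    "carol": {"p3": 5, "p4": 4},
}

def cosine(u, v):
    """Cosine similarity between two sparse rating vectors (dicts)."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[k] * v[k] for k in common)
    norm_u = sqrt(sum(x * x for x in u.values()))
    norm_v = sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v)

def recommend(reader, ratings, top_n=1):
    """Rank papers the reader hasn't rated, weighting other readers'
    ratings by how similar their tastes are to the reader's."""
    mine = ratings[reader]
    scores = {}
    for other, theirs in ratings.items():
        if other == reader:
            continue
        sim = cosine(mine, theirs)
        for paper, rating in theirs.items():
            if paper not in mine:
                scores[paper] = scores.get(paper, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("bob", RATINGS))  # → ['p3']
```

With more data one would swap the averaging for matrix factorisation, but the interface stays the same: ratings in, ranked unseen papers out.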

There are hundreds of ways to apply machine learning to academic articles; it's a matter of deciding what you want the machine learning to do.
