All of AM's Comments + Replies

AM10

I agree, and I look forward to seeing how far it goes!

AM10

Thank you for your response!

I think I disagree with your take, but with high uncertainty. The counterfactual world I'm describing will only exist in parallel with the real world, where we get increased spread of the technology + improvements in it.

I agree with your assessment of where the value comes from, but I think a lot of time is spent by office workers on the stuff that could be automated. Since time is fungible, workers can use the saved time on the stuff you say drives value.

AM10

Yes, that makes sense to me - I already own some TSMC and Intel. Samsung also has its own fabs, I believe, so that could be another alternative. I suspect the early AGI businesses will just use existing/already-deployed hardware though, so I'd say the hardware manufacturing stocks would rise with a delay.

AM50

These are my half-baked thoughts on this, putting aside alignment and AI risk completely:

I am betting that large returns will come either to those who own the models underlying AGI (if they are hard/expensive to recreate) and supply them to others via a paid API, or to those who build compelling products using AGI. The 2nd category of companies will probably be startups that pop up once we have AGI, so there's no way to invest in them right now unless you can invest indirectly via a VC fund you think is likely to fund those startups.

For the 1st ca... (read more)

4Jakobovski
What are your thoughts on ASML, TSMC, Intel and other semiconductor fabs + suppliers? It seems to me that the demand for compute will skyrocket, and the companies that manufacture processors (TSMC, Samsung, Intel) and the companies that make tools to make processors (ASML, ??) will expand by orders of magnitude. I think it's a safer bet than Google or MSFT, as there are no real competitors and not enough time for competitors to appear.
AM30

Makes sense — I wonder if they've gotten better at it, or if there is alternative hardware that allows sufficient streaming accelerometer data to be useful to you instead.

3lsusr
I don't think it's a hardware issue per se. The problem was along the lines of "if you want to run software around the clock then we require a complete application to run and that application has a bunch of baggage which wears down the battery". Apple Watch might have improved in the years since we tried.
AM40

The Immotouch bracelets look like a really good idea and business model — I am considering ordering one to use to stop my nail biting habit. Have you considered replicating this functionality as an app for an existing wearable, e.g. an Apple Watch? I can definitely see that as being a profitable app that would make the world a better place. Selling your own single-purpose wearable is likely harder.

7lsusr
We tried putting it on an Apple Watch years ago but to get the app to work we had to enable fitness mode and that killed the battery too fast to be useful.
AM10

He doesn't go into this in the book, but I am fairly sure that Harris would agree with your consequentialist take of "acting as if they had free will". I have heard him speak on this matter in a few of his podcasts around "the hard problem of consciousness" with Dennett, Chalmers and a neurosurgeon whose name I can't find (I remember him being British).

As I understand him, his position is not to view criminals (or anyone) as "morally bad" for whatever they have done, but to move directly on to figuring out the best possible way to avoid bad things ... (read more)

2JBlack
So long as it doesn't increase the number of people committing crimes once to gain access to cushy luxurious rehabilitation programs, sure! In my personal life, I do notice a tendency that when I do categorize people as being "morally wrong", this is more normally associated with wanting them to become morally right than wanting them punished for being morally wrong. The latter seems just one (probably ineffective) way to achieve the former. I don't seem to see this tendency in many other people around me, so I suspect I'm in a minority. I don't think this relies on any particular position in the free will discussion though. I've seen some people "punish" objects for adversely affecting them by yelling or striking at them, and certainly not for reasons of the object having free will. It seems more of an innate urge than any philosophical belief.
AM130

Helpful resource for whoever ends up doing this: Contraceptive Technology. It's a huge book that summarises almost all effectiveness studies that have been done on contraceptives, including the definitions of perfect and typical use (very important when comparing contraceptives). It also has detailed summaries of side effects, medical interactions, description of method of action and well researched "advantages" and "disadvantages" sections — it's basically what doctors use to decide how to prescribe birth control. 

Source: I have used this book myself in research, I work for a birth control app company.

AM10

Good points!
Yes, this snippet is particularly nonsensical to me:

an AI system could be "superintelligent" without any basic humanlike common sense, yet while seamlessly preserving the speed, precision and programmability of a computer

It sounds like their experience with computers has involved them having a lot of "basic humanlike common sense", which would be a pretty crazy experience in that case. When I explain what programming is like to kids, I usually say something like "The computer will do exactly exactly exactly what you tell it to, extremely fast. You ... (read more)

AM20

Great and fair critique of this paper! I also enjoyed reading it and would recommend it just for the history write-up alone.

What do you think is the underlying reason for the bad reasoning in Fallacy 4? Is the orthogonality thesis particularly hard to understand intuitively, or has it been covered so badly by the media so often that the broad consensus of what it means is now wrong?

3electroswing
Hmmm... the orthogonality thesis is pretty simple to state, so I don't think necessarily that it has been grossly misunderstood. The bad reasoning in Fallacy 4 seems to come from a more general phenomenon with classic AI Safety arguments, where they do hold up, but only with some caveats and/or more precise phrasing. So I guess "bad coverage" could apply to the extent that popular sources don't go in depth enough. I do think the author presented good summaries of Bostrom's and Russell's viewpoints. But then they immediately jump to a "special sauce" type argument. (Quoting the full thing just in case) I really don't understand where the author is coming from with this. I will admit that the classic paperclip maximizer example is pretty far-fetched, and maybe not the best way to explain the orthogonality thesis to a skeptic. I prefer more down-to-earth examples like, say, a chess bot with plenty of compute to look ahead, but its goal is to protect its pawns at all costs instead of its king. It will pursue its goal intelligently but the goal is silly to us, if what we want is for it to be a good chess player. I feel like the author's counterargument would make more sense if they framed it as an outer alignment objection like "it's exceedingly difficult to make an AI whose goal is to maximize paperclips unboundedly, with no other human values baked in, because the training data is made by humans". And maybe this is also what their intuition was, and they just picked on the orthogonality thesis since it's connected to the paperclip maximizer example and easy to state. Hard to tell. It would be nice if AI Safety were less disorganized, and had a textbook or something. Then, a researcher would have a hard time learning about the orthogonality thesis without also hearing a refutation of this common objection. But a textbook seems a long way away...
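The chess-bot framing lends itself to a quick code sketch of the orthogonality point: the same search procedure (the capability) can be pointed at a sensible goal or a silly one just by swapping the evaluation function. This is a toy illustration only, assuming the third-party python-chess package; the `pawn_hoarding` goal is hypothetical and deliberately silly, and the negamax simply treats the opponent as adversarial toward whichever goal is plugged in.

```python
# Toy illustration of the orthogonality thesis: identical search, different goals.
# Assumes the third-party `python-chess` package (pip install python-chess).
import chess

def material(board: chess.Board, color: bool) -> float:
    """A 'sensible' goal: net material balance from `color`'s perspective."""
    values = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
              chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}
    return sum((1 if p.color == color else -1) * values[p.piece_type]
               for p in board.piece_map().values())

def pawn_hoarding(board: chess.Board, color: bool) -> float:
    """A 'silly' goal: just count our own surviving pawns."""
    return sum(1 for p in board.piece_map().values()
               if p.color == color and p.piece_type == chess.PAWN)

def best_move(board: chess.Board, evaluate, depth: int = 2) -> chess.Move:
    """Plain negamax to `depth` plies; the search itself is goal-agnostic."""
    def negamax(b: chess.Board, d: int) -> float:
        if d == 0 or b.is_game_over():
            return evaluate(b, b.turn)      # score from the side to move's perspective
        best = float("-inf")
        for move in b.legal_moves:
            b.push(move)
            best = max(best, -negamax(b, d - 1))
            b.pop()
        return best

    best_score, choice = float("-inf"), None
    for move in board.legal_moves:
        board.push(move)
        score = -negamax(board, depth - 1)
        board.pop()
        if score > best_score:
            best_score, choice = score, move
    return choice

board = chess.Board()
print(best_move(board, material))       # competent play toward a normal goal
print(best_move(board, pawn_hoarding))  # same competence, pointed at a silly goal
```

Both calls run the identical search; only the objective differs, which is the point of the thesis: capability and goal vary independently.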
AM20

Great write-up! I work at another femtech company and will share this with some of my colleagues. 

Three thoughts/comments:
1. Clue is very focused on period dates (being a period tracker) and doesn't have very accurate ovulation data. Therefore, cyclical changes in metrics that are affected by ovulation rather than by the period will look dimmer due to differences in the length of people's luteal phases. E.g. the BBT signal would probably be much sharper if the x axis was "days relative to ovulation" rather than "days relative to period", since t... (read more)
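As a rough illustration of point 1, the re-alignment is only a few lines of pandas once there is an ovulation estimate per cycle. The column names here (`user_id`, `date`, `bbt`, `ovulation_date`) are hypothetical placeholders, not Clue's actual schema, and each user is simplified to a single cycle.

```python
# Sketch: average BBT on an x axis of "days relative to ovulation" instead of
# "days relative to period start". All column names are hypothetical.
import pandas as pd

def mean_bbt_by_day_from_ovulation(readings: pd.DataFrame,
                                   ovulations: pd.DataFrame) -> pd.Series:
    """readings: one row per user per day, with a `bbt` column.
    ovulations: one estimated `ovulation_date` per user (one cycle, for simplicity)."""
    df = readings.merge(ovulations, on="user_id")
    df["day_from_ovulation"] = (df["date"] - df["ovulation_date"]).dt.days
    in_window = df["day_from_ovulation"].between(-14, 14)   # keep roughly one cycle around ovulation
    return df.loc[in_window].groupby("day_from_ovulation")["bbt"].mean()
```

Aligning on ovulation rather than period start removes the person-to-person spread in cycle-phase lengths, so the post-ovulatory BBT shift should stack up more sharply in the average.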

Answer by AM30

I don't menstruate, but I work in data science at Natural Cycles (a data-driven birth control app) and look for these types of patterns a lot — our users are not using hormonal birth control though, so the sample is biased in that way.

Clue (a popular period tracker app) recently released one of the best studies I've seen on the mood changes over the menstrual cycle, but unfortunately it is not open access. The authors shared a pdf with me after I emailed them though, DM me if you would like a copy (sci-hub doesn't seem to have the paper yet), or email the authors directly, they are very helpful. 

AM10

True, circulatory diseases would be a big win, but do you think the marginal buck there is likely to do as much as a marginal buck focused on aging, given the amount of funding allocated to each? If we add the R&D budgets focused on circulatory diseases to the treatment cost of circulatory diseases (the potential profit pool for pharma companies), my intuition says that the number would be ~20-100x the total amount of funding going to aging-stopping or -reversing technology. What do you think the ratio would be?

AM10

Definitely a bug! It was my first and only foray into D3.js so there are a lot of bad states you can get into fairly easily. Might rebuild it in something else one day.

3JackH
I think it would be worth rebuilding if you have time. If you do, make sure to share it on Longevity Subreddit. You will get a lot of interest in it there. 
AM40

Love this article.

After reading The Fable of the Dragon-Tyrant a few years ago, after my father died, I went into a deep dive on this and ended up making a calculator comparing the impact of eliminating various causes of death on average/median lifespan. It's very simplistic, but I found it interesting as a way to illustrate how ageing contributes to death:

http://life.analogmantra.com/ 
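The arithmetic behind a calculator like this can be sketched as a simple multi-cause survival model: treat each cause of death as an age-specific hazard, drop whichever causes you "eliminate", and read off where the survival curve crosses 50%. The hazard curves below are made-up toy shapes for illustration, not the data or method the linked tool actually uses.

```python
# Toy cause-elimination model: remove causes and recompute the median age at death.
# All hazard curves are invented for illustration.
import numpy as np

AGES = np.arange(0, 151)

def toy_hazards() -> dict[str, np.ndarray]:
    """Hypothetical per-year death hazards by cause."""
    return {
        "accidents":   np.full(AGES.shape, 5e-4),      # roughly flat with age
        "circulatory": 3e-5 * np.exp(AGES / 11.0),     # rises steeply with age
        "cancer":      1e-5 * np.exp(AGES / 12.0),
    }

def median_age_at_death(excluded: frozenset[str] = frozenset()) -> float:
    hazards = [h for cause, h in toy_hazards().items() if cause not in excluded]
    survival = np.exp(-np.cumsum(sum(hazards)))        # S(age) under the remaining causes
    below_half = np.nonzero(survival < 0.5)[0]         # median = first age with S < 0.5
    return float(AGES[below_half[0]]) if below_half.size else float("inf")

print(median_age_at_death())                           # all toy causes active
print(median_age_at_death(frozenset({"circulatory"}))) # same toy data, one cause removed
```

With real life-table data, the same structure lets you toggle causes on and off, which is roughly what the calculator's checkboxes do.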

2jmh
Very interesting. Assuming we eliminate everything but accidental causes, it looks like we should live to about 120+ years. I think Sinclair had said that was the expected lifespan as well. Taking the tool at face value, it seems that both personally and socially, effort focused on circulatory diseases should give the biggest bang for the buck. Then again, I didn't run through different combinations, so...
2JackH
Really love the app, great work! Just a bug I found (I think it's a bug?) - if I untick all the boxes, the median age of death goes to 0.