100 years? So what will be valuable/interesting to a future AI or other transhuman intelligence? Maybe well-packed genetic material from highly endangered species. Most of them will probably have been preserved elsewhere, but if you have even one hit on a truly lost species, that would probably be of some value/interest to our future children.
Of course. Just let me know if I can be of help.
The target of researchers being able to "think unusually clearly" personally pushes me towards the Bellingham location. That sort of semi-isolation, in what is for me one of the most beautiful regions in the US, is highly conducive to focused thought.
That said, I think it trades off with other potential goals, for example community expansion or access to power. Those of you at MIRI know better where you are on the timeline, but a research institute in the woods feels like it optimizes for a very particular move and may leave you with less flexibility if the game isn't in the state you imagine. As noted, I lack the information to know whether that's a good or bad bet.
Great point. I too lack the information to really say, but I would imagine that the endgame would be to ~100x the size of MIRI, and that when you'...
Given the growth in both AI research and alignment research over the past 5 years, how do the rates of progress compare? Maybe separate out absolute change and the first and second derivatives.
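If it helps make the question concrete, here is a minimal sketch of what I mean by separating those out. The yearly counts below are made-up placeholders, not real publication data; the actual exercise would plug in something like annual paper counts per field.

```python
def discrete_derivatives(counts):
    """Return (absolute change, first differences, second differences)."""
    first = [b - a for a, b in zip(counts, counts[1:])]
    second = [b - a for a, b in zip(first, first[1:])]
    return counts[-1] - counts[0], first, second

# Hypothetical placeholder counts, NOT real data.
ai_papers_per_year = [100, 150, 230, 340, 500, 700]
alignment_papers_per_year = [5, 8, 12, 20, 30, 45]

for name, series in [("AI", ai_papers_per_year),
                     ("alignment", alignment_papers_per_year)]:
    total, d1, d2 = discrete_derivatives(series)
    print(f"{name}: total change = {total}, first diffs = {d1}, second diffs = {d2}")
```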
Or, imagine if this were a service available to restaurants, such that they could offer an option on menu items: +$1 for ethically sourced eggs. The service would be transparent for them (maybe integrate with a payment provider willing to facilitate the network in exchange for the free marketing), and they wouldn't have to deal with buying two sets of eggs or taking on supply risk.
Hm, starting to think there's a version of this that's viable.
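To make the mechanics a bit more concrete, here is a rough sketch of the accounting such a service might do behind the payment provider. The class name, surcharge, and premium figure are all hypothetical illustrations, not a real API or real costs.

```python
ETHICAL_PREMIUM_PER_EGG = 0.15  # hypothetical extra cost of an ethically sourced egg
SURCHARGE = 1.00                # the "+$1 for ethically sourced eggs" menu option

class EthicalSourcingLedger:
    """Pools opt-in surcharges and tracks how much ethical premium they cover."""

    def __init__(self):
        self.collected = 0.0  # surcharges routed here by the payment provider

    def record_opt_in(self, dishes=1):
        """A diner ticks the +$1 box on some number of dishes."""
        self.collected += SURCHARGE * dishes

    def eggs_funded(self):
        """How many eggs' worth of ethical premium the pool currently covers."""
        return int(self.collected / ETHICAL_PREMIUM_PER_EGG)

ledger = EthicalSourcingLedger()
ledger.record_opt_in(dishes=3)
print(ledger.eggs_funded())  # 20 under these placeholder numbers
```

The restaurant never touches two egg supply chains; it just forwards the pooled surcharges.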
I really like this idea of moral fungibility.
By cutting out the need for a separate "ethical" packaging, marketing, and distribution system, you vastly lower the costs for new entrants into the market. Moreover, there would be additional benefits, since you could cut other costs like the impact of long-range transportation (how do I weigh the environmental cost of shipping ethically sourced eggs from Maine to California against buying from a local factory farm?).
I worry that consumers derive as much value from the act of buying the ethically sourced product as from the actual reduction in harm, but maybe there are ways to market around that. Not sure, but it seems worth finding out :).
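For the parenthetical question about shipping versus buying local, the comparison I would want to make looks something like the back-of-the-envelope below. Every number is a placeholder; the point is only the structure of the trade-off, not any conclusion.

```python
# All values are hypothetical placeholders for illustration.
welfare_gain_per_dozen = 1.0      # moral value of switching a dozen eggs to ethical sourcing
kg_co2_per_dozen_shipped = 0.5    # added emissions from Maine -> California transport
moral_cost_per_kg_co2 = 0.1       # exchange rate between emissions and welfare units

net_value = welfare_gain_per_dozen - moral_cost_per_kg_co2 * kg_co2_per_dozen_shipped
print(net_value)  # positive under these placeholders; the real answer hinges on the exchange rate
```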
This is a huge problem area in NLP. You raised quite a few issues, but just to pick two:
- There is a large class of situations where the true model is just how a human would respond. For example, the answer to "Is this good art?" is only predictable with knowledge about the person answering (and about the deictic 'this', but that's a slightly different question). In these cases, I'd argue that the true model inherently needs to model the respondent; a toy sketch of what I mean is below. There's a huge range, but even in the limit case where there is an absolute true answer (and the human is absolutely
...
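A toy way to see the first point: the prediction target is a function of (question, respondent), not of the question alone. The respondent profiles below are invented purely for illustration; a real system would have to learn this conditioning rather than look it up.

```python
respondent_profiles = {
    "alice": {"likes_abstract_art": True},
    "bob": {"likes_abstract_art": False},
}

def predict_answer(question: str, respondent: str) -> str:
    """The prediction depends on who is answering, not just on the question."""
    profile = respondent_profiles[respondent]
    if question == "Is this good art?":
        return "yes" if profile["likes_abstract_art"] else "no"
    return "unknown"

print(predict_answer("Is this good art?", "alice"))  # yes
print(predict_answer("Is this good art?", "bob"))    # no
```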