Comments
justin · 10

This is a huge problem area in NLP. You raise quite a few issues; to pick just two:

  1. There is a large class of situations where the true model is just how a human would respond. For example, the answer to "Is this good art?" is only predictable with knowledge of the person answering (the deictic 'this' matters too, but that's a slightly different question). In these cases, I'd argue that the true model inherently needs to model the respondent (see the first sketch after this list). There's a huge range, but even in the limit case where there is an absolutely true answer (and the human is absolutely wrong), modeling the respondent seems valuable for any AI that has to interact with humans. In any case, one slightly older link to give an example of the literature here: https://www.aclweb.org/anthology/P15-1073/
  2. There's a much larger literature on resolving inter-rater reliability issues that may be of interest. Collected data is almost always noisy, and there is extensive research on measuring and handling that noise. Given the thrust of your article, the approach that may interest you most is active learning, where the system evaluates its own uncertainty and actively requests labels for the examples that would most improve its model (see the second sketch below). Another older example from which you can trace newer work: https://dl.acm.org/doi/10.1109/ACII.2015.7344553
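To make (1) concrete, here's a minimal sketch of respondent modeling on synthetic data. Everything here is an assumption for illustration (the per-rater taste vectors, the feature dimensions, the scikit-learn classifier); the point is just that telling the model who is answering can beat a single pooled predictor when the "right" answer genuinely varies by respondent.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_items, n_raters, n_feats = 400, 5, 8

item_feats = rng.normal(size=(n_items, n_feats))
rater_ids = rng.integers(0, n_raters, size=n_items)

# Hypothetical ground truth: each rater has their own taste vector,
# so the answer depends on who is answering, not just on the item.
tastes = rng.normal(size=(n_raters, n_feats))
labels = (np.einsum("ij,ij->i", item_feats, tastes[rater_ids]) > 0).astype(int)

# Crude respondent modeling: give each rater their own block of weights
# by placing the item features in that rater's slice of the input.
X = np.zeros((n_items, n_raters * n_feats))
for i, r in enumerate(rater_ids):
    X[i, r * n_feats:(r + 1) * n_feats] = item_feats[i]

per_rater = LogisticRegression(max_iter=1000).fit(X, labels)
pooled = LogisticRegression(max_iter=1000).fit(item_feats, labels)
print(f"modeling the respondent: {per_rater.score(X, labels):.2f}")
print(f"one global answer:       {pooled.score(item_feats, labels):.2f}")
```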
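And a second sketch for (2), again on made-up data: Cohen's kappa as one standard inter-rater reliability measure, followed by a bare-bones uncertainty-sampling loop, the simplest form of active learning. The annotator noise rates, seed-set size, and query batch size are arbitrary assumptions, not recommendations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(1)
n, d = 300, 5
X = rng.normal(size=(n, d))
true_labels = (X @ rng.normal(size=d) > 0).astype(int)

# Two hypothetical annotators with different error rates; kappa measures
# how much their agreement exceeds what chance alone would produce.
ann_a = np.where(rng.random(n) < 0.10, 1 - true_labels, true_labels)
ann_b = np.where(rng.random(n) < 0.20, 1 - true_labels, true_labels)
print(f"Cohen's kappa between annotators: {cohen_kappa_score(ann_a, ann_b):.2f}")

# Uncertainty sampling: train on a small seed set, then repeatedly "ask
# the annotator" to label the items the model is least sure about.
labeled = list(range(20))
pool = list(range(20, n))
for _ in range(5):
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], ann_a[labeled])
    probs = clf.predict_proba(X[pool])[:, 1]
    order = np.argsort(np.abs(probs - 0.5))  # closest to 0.5 = most uncertain
    query = [pool[i] for i in order[:10]]
    labeled += query
    pool = [i for i in pool if i not in query]

clf = LogisticRegression(max_iter=1000).fit(X[labeled], ann_a[labeled])
print(f"accuracy on true labels after {len(labeled)} queried labels: "
      f"{clf.score(X, true_labels):.2f}")
```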
justin · 10

100 years? So what will be valuable/interesting to a future AI or other transhuman intelligence? Maybe well-packed genetic material from highly endangered species. Most of them will probably have been preserved elsewhere, but if you have even one hit on a truly lost species, that would probably be of some value/interest to our future children.

justin · 30

Of course. Just let me know if I can be of help.

justin · 400

The goal of having researchers "be able to think unusually clearly" personally pushes me toward the Bellingham location. That sort of semi-isolation in what is, for me, one of the most beautiful regions in the US is highly conducive to focused thought.

That said, I think it trades off against other potential goals, such as community expansion or access to power. Those of you at MIRI know better where you are on the timeline, but a research institute in the woods feels like it optimizes for a very particular move, and it may leave you less flexibility if the game isn't in the state you imagine. As noted, I lack the information to know whether that's a good or bad bet.

If you prefer a more flexible approach, I'd consider one of the new or old tech hubs: Austin being an example of the former, Boston of the latter. Both seem in some ways more future-oriented than the Bay, which sadly feels like it's being consumed by a hustler/MBA ethos rather than a creative one. Also, perhaps consider hubs focused on next-generation industries such as biotech (Boston again, or maybe San Diego), as there's a real difference in cultural dynamism compared to locations that are very much in exploit mode.

Finally, if you're still considering moving the community en masse, as discussed in the earlier posts, the only communities I can think of that have successfully done that did not leave by choice. While in two minutes of thought I haven't come up with a way to drive the rationalists from the Bay without physical risk or legal jeopardy, that doesn't mean it can't be done. Short of that, schisms can do wonders for motivation.

justin · 130

Given the growth in both AI research and alignment research over the past 5 years, how do the rates of progress compare? Maybe separate out the absolute change and the first and second derivatives.

justin · 50

Or, imagine if this were a service available to restaurants, so they could offer an option on menu items: +$1 for ethically sourced eggs. The service would then be transparent for them (maybe integrate with a payment provider willing to facilitate the network in exchange for free marketing), and they wouldn't have to deal with buying two sets of eggs or taking on supply risk.

Hm, starting to think there's a version of this that's viable.

justin · 60

I really like this idea of moral fungibility.

By cutting out the need for a separate "ethical" packaging, marketing, and distribution system, you vastly lower the costs for new entrants into the market. Better yet, there would be additional benefits, since you could cut other costs like the impact of long-range transportation (how do I weigh the environmental cost of shipping ethically sourced eggs from Maine to California against buying from a local factory farm?).

I worry that consumers derive as much value from the act of buying the ethically sourced product as from the actual reduction in harm, but maybe there are ways to market around that. Not sure, but it seems worth finding out :).