Comments

Pasero · 66

Consciousness might not matter for alignment, but it certainly matters for moral patienthood.

Pasero · 10

> Presumably many people are already hard at work trying to undo what safety precautions were instilled into Llama-2, and to use various techniques to have it do everything you are imagining not wanting it to do

There are now easily available "uncensored" versions of Llama-2. I imagine the high false-refusal rate will increase the use of these among non-malicious users. It seems highly likely that, in the context of open-source LLMs, overly strict safety measures could actually decrease overall safety.

Pasero · 20

> One last thing I didn’t consider at all until I saw it is that you can use the plug-ins with other LLMs? As in you can do it with Claude or Llama (code link for Claude, discussion for Llama)

A growing number of open-source tools to fine-tune LLaMA and integrate LLMs with other software make this an increasingly viable option.
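
The barrier to entry is lower than it might sound. As a rough sketch (not a recipe), a LoRA fine-tune of Llama-2 with Hugging Face's transformers and peft libraries fits in a few dozen lines; the model name, dataset, and hyperparameters below are placeholders, and you need access to the gated Llama-2 weights plus a suitable GPU.

```python
# Minimal LoRA fine-tuning sketch with Hugging Face transformers + peft.
# Model, dataset, and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "meta-llama/Llama-2-7b-hf"  # assumes access to the gated weights
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(base_model)

# Attach low-rank adapters so only a small fraction of weights are trained.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Any instruction-style text dataset works; this one is just an example.
dataset = load_dataset("yahma/alpaca-cleaned", split="train[:1000]")

def tokenize(example):
    return tokenizer(example["output"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llama2-lora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
        fp16=True,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("llama2-lora")  # saves only the small adapter weights
```

Because only the adapter weights are saved and shared, fine-tunes like this are cheap to produce and distribute, which is part of why the tooling has spread so quickly.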

Pasero · 10

> I'd recommend using a mask with an exhalation valve, if you can.

Excellent write-up. Though I think it's important to note that, if unfiltered, exhalation valves can potentially spread disease.

[This comment is no longer endorsed by its author]
Pasero · 20

I found the last bit very compelling. David Deutsch's The Beginning of Infinity is the only place where I have seen a similar sort of anthropocentrism about the future discussed. Could anyone point me toward more on the topic (or keywords I should be searching for)?