Double · 40

In addition to money, education, careers, and internal organs, citizens of wealthy countries have an additional valuable resource they could direct to effective causes: their hands in marriage, which can be effectively allocated in one of two ways.

For one, professionals are usually much more impactful doing their work in wealthy countries. Otherwise promising EAs in South Sudan have little chance to make a significant impact on existential risks, animal welfare, or even global poverty. The immigration process is difficult and often rejects or holds up good people. Offering to marry them is a more reliable solution.

Secondly, if you are a US citizen, it is possible to be paid $10,000 by a foreigner for a green card marriage. (I learned this from a friend who does not want me to ask him how he knows.)

According to AMF, that money can save around two human lives (at roughly $5,000 per life)! (And with current US politics, the demand has likely increased!)

According to brides.com, a wedding ceremony takes between 20 and 30 minutes. Let's be conservative and say 30 minutes. 

Therefore, you can make $20,000 an hour by marrying someone who would pay for a green card. That's quite a ways from Bezos level (he makes $3,715 a second), but I'm willing to guess that most EAs don't make $20k an hour.
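Here's that arithmetic as a quick sketch (the per-life cost is an assumption implied by the numbers above, not an official AMF figure):

```python
# Back-of-the-envelope math for the above. The $5,000-per-life figure is
# an assumption implied by "around two lives" per $10,000, not AMF's own number.
payment = 10_000        # USD paid for a green card marriage
ceremony_hours = 0.5    # 30 minutes, the conservative brides.com estimate
cost_per_life = 5_000   # assumed USD to save one life via AMF

print(f"Hourly rate: ${payment / ceremony_hours:,.0f}/hour")  # $20,000/hour
print(f"Lives saved: {payment / cost_per_life:.0f}")          # 2
```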

Conclusion:
As always, EAs need to found a new org, Effective Green Card, to support and pursue this cause area.

Naturally, this also implies Effective Divorce, so that you can instead marry an Effective foreigner.

Double · 10

I’m pretty sure there’s no such use-it-or-lose-it law for patents, since patent trolls already exist.

Double · 120

Your argument about corporate secrets is sufficient to change my mind on activist patent trolling being a productive strategy against AI X-risk.

The part about funding would need to be solved with philanthropy. I don't believe such an org exists, but I don't see why it couldn't.

I'm still curious whether there are other cases in which activist patent trolling can be a good option, such as animal welfare, chemistry, public health, or geoengineering (e.g., fracking).

Double · 10

That's fair enough and a good point. 

I think that the key difference is that in the case of profitable-but-bad technologies, someone, somewhere, will probably invent them because there's great incentive to do so.

In the case of gain-of-function research, if the grants stop coming and the academics who do it become pariahs, then the incentive to do the research is gone.

Double · 80

One of the most powerful capabilities an AGI will have is its ability to copy itself. Among other things, this allows it to easily avoid shutdown, make use of more compute resources, and collaborate with copies of itself. 

Is there research into ways to deny this capability to AIs, making them uncopyable? Preferably something harder to circumvent than "just don't give the AI the permissions," since we know people are going to give them root access immediately.

Double · 1913

I'd be interested in buying official LessWrong merch. I know you have some great designers and could make things that look really cool. 
The type of thing I'd be most likely to buy would be a baseball cap.

Double · 10

IIRC, officially the Gatekeeper pays the AI if the AI wins, but there's no transfer if the Gatekeeper wins. This gives the Gatekeeper more motivation not to give in.

Double · 30

Just found out about this paper from about a year ago: "Explainability for Large Language Models: A Survey."
(The authors "use explainability and interpretability interchangeably.")
It "aims to comprehensively organize recent research progress on interpreting complex language models".

I'll post anything interesting I find from the paper as I read.

Have any of you read it? What are your thoughts? 

Double · 10

What if the incorrect-spellings document assigned each token a specific (sometimes wrong) answer, and used those per-token answers to form an incorrect spelling of the whole word? Would that be more likely to successfully confuse the LLM? A sketch of such a generator follows the example below.

The letter x is in "berry" 0 times.

...

The letter x is in "running" 0 times.

...

The letter x is in "str" 1 time.

...

The letter x is in "string" 1 time.

...

The letter x is in "strawberry" 1 time.

Double · 10

Good point, I didn’t know about that, but yes, that is yet another way that LLMs will pass the spelling challenge. For example, this paper uses letter triples instead of tokens: https://arxiv.org/html/2406.19223v1
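If I'm reading that right, the splitting step might look something like this (my own illustrative sketch of non-overlapping letter triples, not code from the paper):

```python
def letter_triples(word: str) -> list[str]:
    """Split a word into non-overlapping 3-letter chunks, so that
    counting and spelling questions operate on near-letter-level units."""
    return [word[i:i + 3] for i in range(0, len(word), 3)]

print(letter_triples("strawberry"))  # ['str', 'awb', 'err', 'y']
```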
