Kaustubh Kislay

Comments

Introducing some variance into text gets at exactly what makes human text human. LLMs do have some safeguards around tokenization that let them handle typos in prompts, so in the same way there will likely be ways to handle typos in actual writing. Even so, injecting uncommon variance into text to make it read as more 'human' would be the best way to avoid AI detectors.
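To make the "uncommon variance" idea concrete, here is a toy sketch; the function name and the choice of perturbations are my own illustration, not anything detectors or LLM vendors actually do. It randomly drops, doubles, or swaps characters at a low rate: the kind of low-level noise that typo-tolerant tokenizers smooth over, but that shifts the statistical fingerprint of the text.

```python
import random

def inject_variance(text: str, rate: float = 0.02, seed: int | None = None) -> str:
    """Randomly perturb a string at the character level.

    Each alphabetic character has a small chance of being dropped,
    doubled, or swapped with its neighbour.
    """
    rng = random.Random(seed)
    chars = list(text)
    out = []
    i = 0
    while i < len(chars):
        c = chars[i]
        if c.isalpha() and rng.random() < rate:
            choice = rng.choice(["drop", "double", "swap"])
            if choice == "drop":
                pass  # omit this character entirely
            elif choice == "double":
                out.extend([c, c])  # repeat the character
            elif choice == "swap" and i + 1 < len(chars):
                out.extend([chars[i + 1], c])  # transpose with the next character
                i += 1
            else:
                out.append(c)
        else:
            out.append(c)
        i += 1
    return "".join(out)

print(inject_variance("Introducing some variance into text goes a long way.", seed=0))
```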

This makes a lot of sense. I would guess that the most successful bureaucratic companies have optimized for a balance of dominance/status and profit margins. For example, a bureaucratic company, although not mainly motivated by profit, must still increase it in order to hire more people; but people cost money, so to maximize the dominance hierarchy it must minimize per-employee cost while maximizing profit.
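As a back-of-the-envelope illustration of that trade-off (the variable names and numbers here are purely hypothetical), headcount is roughly the surplus left after non-labor costs divided by the cost per employee, so a status-maximizing bureaucracy grows fastest by raising revenue and squeezing per-employee cost:

```python
def max_headcount(revenue: float, non_labor_costs: float, cost_per_employee: float) -> int:
    """Rough upper bound on sustainable headcount: the surplus left
    after non-labor costs, divided by what each employee costs."""
    surplus = revenue - non_labor_costs
    return max(0, int(surplus // cost_per_employee))

# Hypothetical numbers: raising revenue or cutting per-employee cost
# both grow the hierarchy.
print(max_headcount(10_000_000, 2_000_000, 100_000))  # 80
print(max_headcount(12_000_000, 2_000_000, 80_000))   # 125
```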

It seems to me that the accelerationist argument mainly relies on the presence of international competition, especially with China, who is cast as the main "antagonist" in the post.

I would like to mention that China is not purely accelerationist either: it has a significant decelerationist contingent of its own, one that is making its voice heard at the highest levels of the Chinese government. So if the US ends up slowing down, it is not necessarily true that China will keep making progress and overtake us.

We are all human at the end of the day, so automatically assuming China is willing to inflict unforgivable damage on the world is unfair, to say the least.

Perplexity seems to be significantly more effective than competing models when it comes to acting as a research tool/answer engine. This is mainly because that is its primary use case, whereas models such as Claude on its own and ChatGPT excel in other areas. I do believe Perplexity's citation techniques could be some of the first baby steps toward far- (possibly near-) future automated AI research.
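As a rough sketch of what I mean by citation techniques (the names and structure below are my own illustration, not Perplexity's actual pipeline), the core idea is keeping an explicit mapping from each claim in the answer back to a retrieved source, then rendering numbered citations with a reference list:

```python
from dataclasses import dataclass

@dataclass
class Source:
    title: str
    url: str
    snippet: str

def cite_answer(claims: list[tuple[str, int]], sources: list[Source]) -> str:
    """Render an answer where every claim carries a numbered citation,
    followed by the reference list, in the answer-engine style."""
    body = " ".join(f"{claim} [{idx + 1}]" for claim, idx in claims)
    refs = "\n".join(f"[{i + 1}] {s.title} - {s.url}" for i, s in enumerate(sources))
    return f"{body}\n\n{refs}"

# Hypothetical sources and claims, purely for illustration.
sources = [
    Source("Example paper A", "https://example.org/a", "..."),
    Source("Example paper B", "https://example.org/b", "..."),
]
claims = [
    ("Answer engines ground each claim in a retrieved document,", 0),
    ("and they expose the mapping from claim to source.", 1),
]
print(cite_answer(claims, sources))
```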

This may be one reason why Meta is making its Llama models open source. In a future where Coasean bargaining comes into play for larger companies, which it most likely will, Meta may have a cop-out in having made its models open source. Obviously, as you said, there will then have to be some restrictions on open-source AI models, but "open source-esque" models may be how companies such as OpenAI and Anthropic avoid liability in the future.