Comments

daijin

I recently heard that thinking out loud is an important way for people to build trust (not just for LLMs) and this has helped me become more vocal. It has unfortunately not helped me become more correct, but I'm betting the tradeoff will be net positive in the long run.

Answer by daijin

go find people who are better than you by a lot. one way to quickly do this is to join some sort of physical exercise class, e.g. running, climbing, etc. there will be lots of people who are better than you. you will feel smaller.

or you could read research papers. or watch a movie with real-life actors who are really good at acting.

you will then figure out, as @Algon has mentioned in the comments, that the narcissism is load-bearing, and you will have to deal with that. which is a lot scarier

daijin

game-theory-trust is built through expectation of reward from future cooperative scenarios. it is difficult to build this when you 'don't actually know who or how many people you might be talking to'.
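
A toy way to see this (the payoffs and numbers below are mine, purely for illustration): in an iterated prisoner's dilemma, cooperating only pays if the discounted value of future cooperation beats a one-off defection, and anonymity collapses the chance of ever meeting the same counterparty again.

```python
# Toy repeated prisoner's dilemma with grim-trigger punishment.
# T > R > P are the usual temptation / reward / punishment payoffs (my numbers).
T, R, P = 5, 3, 1

def cooperation_stable(delta: float) -> bool:
    """delta = discount factor x probability of meeting this counterparty again."""
    # value of cooperating forever vs. defecting once and being punished forever
    return R / (1 - delta) >= T + delta * P / (1 - delta)

for p_meet_again in (0.9, 0.5, 0.1):      # anonymity drives this toward zero
    delta = 0.95 * p_meet_again
    print(f"p(meet again)={p_meet_again}: cooperation stable? {cooperation_stable(delta)}")
```

With these numbers, cooperation is only stable in the 0.9 case: once you can't expect to see the other party again, the math stops supporting trust.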

daijin

I did see the XKCD and I agree haha, I just thought your phrasing implied 'optimize everything (indiscriminately)'.

When I say caching, I mean retaining intermediate results and tools when the cost of doing so is near-free.
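
Concretely, something like this (a minimal Python sketch of "near-free" caching; the names are just for illustration):

```python
import functools, pathlib, pickle

@functools.lru_cache(maxsize=None)     # in-memory: keeping the result is near-free
def expensive_analysis(x: int) -> int:
    return sum(i * i for i in range(x))

def cached_to_disk(path: str, compute):
    """Keep an intermediate result across runs; recompute only if the file is missing."""
    p = pathlib.Path(path)
    if p.exists():
        return pickle.loads(p.read_bytes())
    result = compute()
    p.write_bytes(pickle.dumps(result))
    return result
```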

daijin

Nice. So something like grabbing a copy of the SWE-bench dataset, writing a pipeline that would solve those issues, then putting that on your CV?
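
If so, the skeleton is pretty small. Rough sketch only: I'm assuming the dataset lives on Hugging Face under princeton-nlp/SWE-bench, the field names below are my guess at its schema, and generate_patch stands in for whatever agent loop you end up writing.

```python
from datasets import load_dataset

# Assumed dataset id and field names; check the actual SWE-bench dataset card before relying on them.
swebench = load_dataset("princeton-nlp/SWE-bench", split="test")

def generate_patch(issue_text: str, repo: str) -> str:
    return ""  # placeholder for your LLM / agent pipeline

for task in swebench.select(range(3)):
    patch = generate_patch(task["problem_statement"], task["repo"])
    # next step: apply the patch and run the repo's tests to score yourself
```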

I will say though that your value as an employee is not 'producing software' so much as solving business problems. How much conviction do you have that producing software marginally faster using AI will improve your value to your firm?

daijin

so you want to build a library containing all human writings + an AI librarian.

  1. the 'simulated planet earth' is a bit extra and overkill. why not a plaintext chat interface, e.g. what ChatGPT is doing now?
  2. of those people who use ChatGPT over real-life libraries (of course not everyone), why don't they 'just consult the source material'? my hypothesis is that the source material is dense and there is a cost to extracting the desired material from it. your AI librarian does not solve this.

I think the two things we have right now ("LLM assistants that are to-the-point" and "libraries containing source text") serve distinct purposes and have distinct advantages and disadvantages.

LLM-assistants-that-are-to-the-point are great, but they

  • don't exist in the world, therefore sometimes hallucinate or produce plausible-sounding falsehoods; for example a statement like "K-Theanine is a rare form of theanine, structurally similar to L-Theanine, and is primarily found in tea leaves (Camellia sinensis)" is statistically probable (I pulled it out of GPT-4 just now) but factually incorrect, since K-theanine does not exist.
  • don't exist in the world, leading to suboptimal retrieval. i.e. if you asked an AI assistant 'how do I slice vegetables' but your true question was 'im hungry i want food', the AI has no way of knowing that; the AI also doesn't immediately know what vegetables you are slicing, which limits its usefulness.

libraries containing source text partially solve the hallucination problem because human source text authors typically don't hallucinate. (except for every poorly written self-help book out there.)

from what I gather you are trying to solve the two problems above. great. but doubling down on 'the purity of full text' and wrapping some fake grass around it is not the solution.

here is my solution:

  • atomize texts into conditional, contextually-absolute statements and then run retrieval on these statements (rough sketch after this list). For example, "You should not eat cheese" becomes "eating excessive amounts of typically processed cheese over the long run may lead to excess sodium and fat intake".
  • help AI assistants come into the world, while maintaining privacy
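
Rough sketch of the first bullet (atomize here is a stand-in for an LLM rewrite step, and the retrieval scoring is deliberately dumb so the example stays self-contained):

```python
# Stand-in for an LLM call that rewrites a passage into standalone, context-free claims,
# e.g. "You should not eat cheese" -> "Eating large amounts of processed cheese long-term
# adds excess sodium and fat." Here we just split on sentences to keep the sketch runnable.
def atomize(passage: str) -> list[str]:
    return [s.strip() for s in passage.split(".") if s.strip()]

# Deliberately simple retrieval: rank atomized statements by token overlap with the query.
def retrieve(query: str, statements: list[str], k: int = 3) -> list[str]:
    q = set(query.lower().split())
    return sorted(statements,
                  key=lambda s: len(q & set(s.lower().split())),
                  reverse=True)[:k]

corpus: list[str] = []
for passage in ["You should not eat cheese. Cheese is high in sodium and saturated fat."]:
    corpus.extend(atomize(passage))

print(retrieve("is cheese bad for me", corpus))
```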
daijin

Another consequence of this is that inviting your friend to zendo is not weird, but inviting all your friends publicly to zendo is.

daijin

'Weirdness' is not about being different from the group, it is about causing the ingroup pain, which happens to correlate with being distinct from the ingroup (weird). We should call them ingroup-pain-points instead.

Being loudly vegan is spending ingroup-pain-points, because getting in someone's face and criticising their behaviour causes them pain. Serving your friends tasty vegan food does not cause them pain and therefore incurs no ingroup-pain-points.

There is a third class of ingroup pain point that I will call the 'cultural pain point'. My working definition of 'culture' is 'suboptimal behaviours that signal ingroup membership'. If you refuse to partake in a suboptimal behaviour, this does not cause you pain, but since you are now in a better position than others in the ingroup, you have now caused them pain. This is why you can be vilified for being vegan in certain 'cultures': you are being more optimal (healthier) relative to other people in a way that is (implicitly or explicitly) identified as a signalling-suboptimal-behaviour.

daijin

'If some 3rd party brings that bird home to my boss instead of me, I'm going to be unwealthy and unemployed.'

Have you talked to your boss about this? I have; for me the answer was some combination of:

"Oh but using AI would leak our code"

"AI is a net loss to productivity because it errors too much / has context length limitations / doesn't care for our standards"

And that is not solvable by a third party, so my job is safe. What about you?

daijin

I recall a solution to the outer alignment problem phrased as 'minimise the amount of options you deny to other agents in the world', which is a more tractable version of 'minimise net long-term changes to the world'. There is an article explaining this somewhere.
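
A toy formalisation of what I remember (everything here is illustrative, not the article's actual proposal): score each candidate action by task reward minus a penalty for how many of another agent's currently-reachable options it forecloses.

```python
# Toy "don't deny options to other agents" scoring. The world model
# (still_reachable) and the reward function are placeholders supplied by the caller.
def options_denied(action, other_agent_options, still_reachable) -> int:
    """Count options that were reachable before but are not reachable after taking `action`."""
    return sum(1 for o in other_agent_options if not still_reachable(o, action))

def choose_action(actions, task_reward, other_agent_options, still_reachable, penalty_weight=1.0):
    return max(
        actions,
        key=lambda a: task_reward(a)
                      - penalty_weight * options_denied(a, other_agent_options, still_reachable),
    )
```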
