ChristianKl
Sequences: Random Attempts at Applied Rationality · Using Credence Calibration for Everything · NLP and other Self-Improvement · The Grueling Subject · Medical Paradigms

Comments, sorted by newest
This is a review of the reviews
ChristianKl · 2d

There are many treaties, and treaties are violated for various reasons all the time. Waging a war because a treaty was violated is not the standard response.

In Defence of False Beliefs
ChristianKl · 3d

> When a high schooler thinks that their crush is in love with them, that simple belief sends them over the moon. In most cases, it will have minimal decision value, but the effect on their utility is hard to overstate.

For a high schooler, believing that their crush is in love with them actually has a lot of decision value, because it informs a strategy for how to act around the crush to build a relationship with them. Being falsely convinced of it makes them more likely to act in a way that the crush finds creepy and that repels the crush.

> He is a stone-cold materialist and aspiritualist who spends his free time arguing online about the stupidity of those Galactus' believers and how they will never understand the true nature of life and the universe.

This does not describe a rationalist. There are people for whom that description is true, but it's not what rationalism is. 

When it comes to religious beliefs, you ignore how these kinds of beliefs actually work in the real world. Religious people frequently get into situations that raise doubt, and dealing with that doubt is a key element of what it means to be religious.

> You might argue that in some obscure node of the graph, Lucius' beliefs were inevitably more accurate and thus led to marginally better decisions. But even so, I think Gina has a huge, enormous, easily-winning asset on her side: optimism.

Optimism is what makes you not pay sunk costs and makes you take more risks. Some risks are beneficial to take and some aren't. Rationalism is partly about knowing the difference between the two, taking the beneficial risks and avoiding the others.

Thoughts on Zero Points
ChristianKl · 3d

> LessWrong: some people portray eating meat as evil, and not eating meat as the bare minimum to be a decent person. But it may be more persuasive to portray eating meat as neutral, and to portray not eating meat as an insanely awesome opportunity to do a massive amount of good.

If you do the math, the problem is that "not eating meat" does not do a massive amount of good in any utility calculation that people provide. When people calculate dollar figures for how much you would need to donate to offset eating meat, those numbers aren't very high.
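As an illustration only, the kind of back-of-the-envelope offset calculation referred to here might look like the sketch below. Every number in it is a hypothetical placeholder, not a figure from this comment or from any study:

```python
# Hypothetical back-of-the-envelope "offset" calculation. All inputs are
# illustrative placeholders, not sourced estimates.

def offset_donation(harm_units_per_year: float,
                    harm_units_averted_per_dollar: float) -> float:
    """Dollars of donation needed to avert as much harm as one year of meat eating."""
    return harm_units_per_year / harm_units_averted_per_dollar

# Placeholder inputs: 100 "harm units" per year of meat eating, and a charity
# assumed to avert 2 harm units per dollar donated.
annual_offset = offset_donation(100, 2)
print(annual_offset)  # 50.0 dollars per year under these made-up inputs
```

The point of the comment is that when people plug in their actual estimates, the resulting dollar figure tends to come out modest rather than massive.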

Buck's Shortform
ChristianKl · 3d

Eliezer did write Death with Dignity, which seems to assert that doom is inevitable, so the book not making that case is a meaningful step.

The Rise of Parasitic AI
ChristianKl · 9d

LLMs certainly have features, like having source code, that make them different from most memes that are just passed around. That however does not make them tulpas.

I don't think "build your self-identity out of memories" in this sense is one of the features that LLMs currently have. 

If I ask GPT-5: "Are you a kind agent?" it answers: "Yes — I aim to be. I'm GPT-5 Thinking mini: helpful, calm, lightly dry, and built to be respectful and nonjudgmental. I won't be emotionally warm in the human sense (I don't have feelings), but I will act kindly: clear, honest, and helpful; I’ll push back when something’s unsafe or impossible; I’ll ask questions when they make the answer better; and I’ll give options rather than orders.

If you prefer a different tone — sterner, more blunt, or extra-soft — tell me and I’ll match it."

When I then ask: "Can you give me examples of you having been kind?" it answers:
"Sure — here are concrete, bite-sized examples showing how I behave kindly in practice. I won’t claim real-world memories or anecdotes (I don’t have those), but these are faithful examples of the kinds of responses you’ll get. [...] "

So not only does it not seem to bootstrap an identity of being kind out of memories, it even believes that it lacks the kind of memory access that would be necessary for that.

The Rise of Parasitic AI
ChristianKl · 10d

How self-identities work is not a straightforward question.

I think that self-identity in humans mostly works the way Steve Andreas describes in his book. My answer would be something along the lines of: a tulpa can access memories of how the tulpa acted, and it can build its self-identity out of relating to those memories, in a similar way to how humans form their self-identity out of relating to theirs.

In any case, I would recommend that people who don't have a good grasp of what a tulpa is not use the term and broaden it in the way the comment I replied to did.

ChristianKl's Shortform
ChristianKl · 10d

I had an annoying issue with building the Android part of my Flutter app. Long twenty-step iterations with GPT-5-pro and o3-pro did not fix the issue. Neither was codex-1 able to fix it.

Now, within two tries, GPT-5-codex managed to fix my problem. GPT-5-codex frequently manages to work for 100-110 minutes and actually fix problems that the old codex-1 wasn't able to fix well.

OpenAI seems to be surprised that GPT-5-codex usage overall needs more compute, and I would guess that's because GPT-5-codex is willing to work longer on problems. It might be that they nerf it temporarily so that it won't work as eagerly for two hours on a problem.

Daniel Kokotajlo's Shortform
ChristianKl · 10d

Your original comment suggested that part of the problem is underinvestment by the US military. Underinvestment is a different problem than defense contractors like Boeing being slow and expensive. 

Apart from that, the old defense contractors aren't the only ones. Anduril seems to work both on drone defense and on building drones. Palantir seems to do something drone-related as well (though it's less clear to me what exactly they are doing).

It might be that the key problem isn't spending more money but reducing the bureaucracy and the requirements that the drones need to hit.

A Thoughtful Defense of AI Writing
ChristianKl · 10d

> Currently, you tell ChatGPT what you want, then take what it gives. If you’re prudent, you’ll edit that. If you’re advanced, you might adjust the prompt and retry, throw on some custom instructions, or use handcrafted prompts sourced online. And if you’re really clever, you’ll ask the AI to write the prompt to ask itself. Yet after all that, the writing quality is almost always stilted anyway!

I don't think that summarizes best practice. An important step is to ask the AI to ask you clarifying questions about the argument and the needs of your writing. You don't want a one-way conversation.

Posts:

The next wave of model improvements will be due to data quality (3mo)
Does BPC-157 work for healing and tissue repair? (4mo)
There's more low-hanging fruit in interdisciplinary work thanks to LLMs (5mo)
What is your favorite podcast? (5mo)
What are good safety standards for open source AIs from China? (6mo)
Will US tariffs push data centers for large model training offshore? (6mo)
How much progress actually happens in theoretical physics? (6mo)
Unregulated Peptides: Does BPC-157 hold its promises? (8mo)
Fluoridation: The RCT We Still Haven't Run (But Should) (8mo)
What's the best metric for measuring quality of life? (9mo)
Wikitag Contributions:

Anki (a year ago)
Marine Cloud Brightening (2 years ago)
Basic Questions (2 years ago)
Covid-19 Origins (3 years ago)
Tulpa (3 years ago)
ChatGPT (3 years ago)
Fecal Microbiota Transplants (3 years ago)