LESSWRONG

tailcalled

Comments
Linear Diffusion of Sparse Lognormals: Causal Inference Against Scientism
tailcalled's Shortform · 4y · 264 comments
I'm scared.
tailcalled · 1d

I did well by taking care of myself and trying to help things progress.

Problematic Professors
tailcalled · 4d

The issue with judging the practitioners is that practicing it may be correlated with other things that are much more harmful. It's like all the talk about how single parenthood is supposedly bad for you, which then doesn't hold up to more careful scrutiny, as far as I know.

tailcalled's Shortform
[+] tailcalled · 7d (collapsed)
tailcalled's Shortform
[+] tailcalled · 7d (collapsed)
tailcalled's Shortform
[+] tailcalled · 10d (collapsed)
Lurking in the Noise
tailcalled · 10d

Pathogenic bacteria aren't actually optimized to kill you, because they need to exploit you. The pathogenic element exists in order to get you to spread the bacterium to other hosts.

Racial Dating Preferences and Sexual Racism
[+] tailcalled · 12d (collapsed)
tailcalled's Shortform
tailcalled · 14d

No it doesn't. I obviously understood my old posts (and still do - the posts make sense if I imagine ignoring LDSL). So I'm capable of understanding whether I've found something that reveals problems in them. It's possible I'm communicating LDSL poorly, or that you are too ignorant to understand it, or that I'm overestimating how broadly it applies, but those are far more realistic than that I've become a pure crank. If you still prefer my old posts to my new posts, then I must know something relevant you don't know.

tailcalled's Shortform
tailcalled · 15d

A lot of my new writing results from the conclusions of, or is in response to, my old research ideas.

tailcalled's Shortform
tailcalled · 15d

Which seems to imply you (at least 3 hours ago) believed your theory could handle relatively well-formulated and narrow "input/output pair" problems. Yet now you say

The relevance of zooming in on particular input/output problems is part of my model.

If I treat your theory this way, it is only because you did, 3 hours ago, when you believed I hadn't read your post or wouldn't even give you the time of day. You claimed "How do we interpret the inner-workings of neural networks." was "not a puzzle unless you get [a?] more concrete application of it", yet the examples you list in your first post are no less vague, and often quite a bit more vague, than "how do you interpret neural networks?" or "why are adversarial examples so easy to find?" For example, the question "Why are people so insistent about outliers?" or "Why isn't factor analysis considered the main research tool?"

"Why are adversarial eamples so easy to find?" is a problem that is easily solvable without my model. You can't solve it because you suck at AI, so instead you find some AI experts who are nearly as incompetent as you and follow along their discourse because they are working at easier problems that you have a chance of solving.

"Why are people so insistent about outliers?" is not vague at all! It's a pretty specific phenomenon that one person mentions a general theory and then another person says it can't be true because of their uncle or whatever. The phrasing in the heading might be vague because headings are brief, but I go into more detail about it in the post, even linking to a person who frequently struggles with that exact dynamic.

As an aside, you seem to be trying to probe me for inconsistencies and contradictions, presumably because you've written me off as a crank. But I don't respect you, and I'm not trying to come off as credible to you (really, I'm slightly trying to come off as non-credible to you, because your level of competence is too low for this theory to be relevant or good for you). And to some extent you know that your heuristics for identifying cranks are not going to solely pop out at people who are forever lost to crankdom, because you haven't just abandoned the conversation.

For... what exactly? For theories of everything? Oh, I assure you, there is quite a bit of competition there. For statistical modeling toolkits? Ditto. What exactly do you think the unique niche you are trying to fill is? You must be arguing against someone, and indeed you often do argue against many.

Theories of everything that explain why intelligence can't model everything and you need other abilities.

Posts

−12 · Against Infrabayesianism · 9h · 0 comments
31 · Knocking Down My AI Optimist Strawman · 5mo · 3 comments
12 · My Mental Model of AI Optimist Opinions · 5mo · 7 comments
23 · Evolution's selection target depends on your weighting · 7mo · 22 comments
43 · Empathy/Systemizing Quotient is a poor/biased model for the autism/sex link · 8mo · 0 comments
12 · Binary encoding as a simple explicit construction for superposition · 9mo · 0 comments
11 · Rationalist Gnosticism · 9mo · 10 comments
32 · RLHF is the worst possible thing done when facing the alignment problem · 10mo · 10 comments
10 · Does life actually locally *increase* entropy? [Question] · 10mo · 27 comments
21 · Why I'm bearish on mechanistic interpretability: the shards are not in the network · 10mo · 40 comments