I think you would probably be downvoted because you have already admitted to writing poorly thought-out, ignorant comments under conditions conducive to arrogance and bad judgment, which you are apparently unashamed of and feel no need to rectify (e.g. by refraining from commenting until you have recovered), while dragging in unrelated and seriously problematic claims, like an uncritical belief in Dunning-Kruger as a real effect, or claiming that anyone is touting 'IQ over WAIS' (WAIS... like, the IQ test WAIS?), or apparently believing in things like multiple intell...
I think you're right. It goes both ways.
I also don't think we need to be completely anxious about it. Few people carry 5 gallons of water 2 miles uphill every morning and chop firewood for an hour after that. Do we suffer for it? Sure. Is it realistic to live that way in the modern age? Not really.
We adapt to the tasks at hand, and if somebody starts making massive breakthroughs by giving up their deep focus skills, maybe we should thank them for the sacrifice.
Overall, this would be a helpful feature, but any time you factor karma into it, you will also bolster knee-jerk cultural prejudices. Even a community that consciously attempts to minimize prejudices still has them, and may be even more reluctant to recognize it. This place is still popularizing outmoded psychology, and with all the influence LW holds within AI safety circles, I have strong feelings about further reinforcing that.
Having options for different types of feedback is a great idea, but I've seen enough not to trust karma here. At the very least, I don't think it should be part of the default setting. Maybe let people turn it on manually with a notification of that risk?
As far as the conditioning goes, Habryka showed me some base-model outputs conditioned on karma/agreement, and there turns out to be an EDT-like problem with LW-style comments when you condition on high values: often, a high-scoring LW comment will include strong empirical evidence like personal experience or citations, which would be highly convincing indeed... if it were true, rather than confabulated.
So if you sampled a response to your new post about "X might be helpful", then a high-value conditioning might generate a counter-comment from "Gwern...
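For concreteness, here is a minimal sketch of what karma-conditioned sampling could look like; the metadata header format, the stand-in model, and the helper name are all assumptions for illustration, not the actual setup Habryka used:

```python
# Sketch of karma-conditioned sampling from a base model, assuming the karma
# score is prepended as a plain-text header. Illustrative only: a base model
# trained on such headers would imitate whatever correlates with high scores,
# including confident-sounding citations and anecdotes, true or not.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")        # stand-in base model
model = AutoModelForCausalLM.from_pretrained("gpt2")

def sample_comment(post_text: str, karma: int, max_new_tokens: int = 120) -> str:
    # Condition the continuation on a hypothetical karma header.
    prompt = f"Karma: {karma}\nPost: {post_text}\nComment:"
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        top_p=0.95,
    )
    # Return only the newly generated comment text, not the prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:],
        skip_special_tokens=True,
    )

print(sample_comment("X might be helpful", karma=150))
```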
The reason why nobody in this community has successfully named a 'pivotal weak act' where you do something weak enough with an AGI to be passively safe, but powerful enough to prevent any other AGI from destroying the world a year later - and yet also we can't just go do that right now and need to wait on AI - is that nothing like that exists.
Only a Sith deals in absolutes.
There's always unlocking cognitive resources through meaning-making and highly specific collaborative network distribution.
I'm not talking about "improving public epistemology...
Nice to hear people are making room for uncomfortable honesty and weirdness. Wish I could have attended.
Levi da.
I'm here to see if I can help.
I heard a few things about Eliezer Yudkowsky. Saw a few LW articles while looking for previous research on my work with AI psychological influence. There isn't any, so I signed up to contribute.
If you recognize my username, you probably know why that's a good idea. If you don't, I don't know how to explain it succinctly yet. You'd have to see for yourself, and a web search can do that better than an intro comment.
It's a whole-ass rabbit hole, so either follow to see what I end up posting or downvote to repress your curiosity. I get it. It's not comfortable for me either.
Update: explanation in bio.
0%
Not that it matters.
The facilitator acknowledges that being 100% human-generated is a necessary inconvenience for this project, due to the large subset of people who have been conditioned not to judge content by its merits but rather by its use of generative AI. It's unfortunate, because there is also a large subset of people with disabilities that can be accommodated by genAI assistance, such as those with dyslexia, limited working memory, executive dysfunction, coordination issues, etc. It's especially unfortunate because those are the people who tend to...
It's because they take less sustained attention/effort and provide more immediate/satisfying results. LW is almost purely theoretical and isn't designed to be efficient. It's an attempt to logically override bias rather than harness the quirks of human neurochemistry to automate the process.
Computer scientists are notorious for this. They know how brains make thoughts happen, but they don't have a clue how people think, so ego drives them to rationalize a framework that casts the flaws of others as incuriosity and lack of dedication. This happens beca...
You are never too old to reboot your sense of reality. Remember that.
Do you think such stories would provide any value towards addressing the issue?
Yes, but what if, instead of merely generating new fiction (which may or may not become popular/influential, and even if it does, could take years to do so), we strategically inject benevolent-AI concepts into established narratives to engage particularly thoughtful, aligned, and/or driven communities? I didn't actually get the idea from HPMOR, but the concept turns out to be similar.
ao3, Fot4W, ch5
This is a late comment, I know, but how do you imagine this experience unfolding when multiple models and systems converge on the inverse effect?
I.e., rather than summoning the AI character, the AI character summons you.
Well, shit.
Welcome to my world.
That was an important day, but this would stop Jung in his tracks. This is why I don't give a flying fuck about upvotes. Praise RNJesus.
Can I assume you know what happened to Maria, Rose, and Sina? What do you think the 4th's name is?