Was this not originally tagged “personal blog”?
I’m not sure what the consensus is on how to vote on these posts, but I’m sad that this post’s poor reception might be why its author deactivated their account.
I just reported this to Feedly.
Thanks for the info! And no worries about the (very) late response – I like that people on this site fairly often reply at all, even well beyond same-day or within a few days; it makes the discussions feel more 'timeless' to me.
The second "question" wasn't a question, but it was due to not knowing that Conservative Judaism is distinct from Orthodox Judaism. (Sadly, capitalization is only relatively weak evidence of 'proper-nounitude'.)
Some of my own intuitions about this:
I think my question is different, tho that does seem like a promising avenue to investigate – thanks!
That's an interesting idea!
An oscilloscope
I guessed that's what you meant but was curious whether I was right!
If the AI isn't willing or able to fold itself up into something that can be run entirely on a single, human-inspectable CPU in an airgapped box, running code that is amenable to easily proving things about its behavior, then you can just not cooperate with it (or not do whatever else you were planning to do by proving something about it) and shut it off instead.
Any idea how a 'folded-up' AI would imply anything in particular about the 'expanded' AI?
If an AI 'folded its...
What source code and what machine code is actually being executed on some particular substrate is an empirical fact about the world, so in general, an AI (or a human) might learn it the way we learn any other fact - by making inferences from observations of the world.
This is a good point.
But I'm trying to develop some detailed intuitions about how this would or could work, in particular what practical difficulties there are and how they could be overcome.
...For example, maybe you hook up a debugger or a waveform reader to the AI's CPU to get a memory dump...
This is a nice summary!
fictional role-playing server
As opposed to all of the non-fictional role-playing servers (e.g. this one)?
I don't think most/many (or maybe any) of the stories/posts/threads on the Glowfic site are 'RPG stories', let alone some kind of 'play by forum post' histories; there are just a few that use the same settings as RPGs.
I suspect a lot of people, like myself, learn "content-based writing" by trying to communicate, e.g. in their 'personal life' or at work. I don't think I learned anything significant by writing in my own "higher forms of ['official'] education".
I would still like to see political pressure for truly open independent audits, though.
I think that would be a big improvement. I also think ARC is, at least effectively, working on that or towards it.
Damning allegations; but I expect this forum to respond with minimization and denial.
This is so spectacularly bad faith that it makes me think the reason you posted this is pretty purely malicious.
Out of all of the LessWrong and 'rationalist' "communities" that have existed, how many are ones for which any of the alleged bad acts occurred? One? Two?
Out of all of the LessWrong users and 'rationalists', how many have been accused of these alleged bad acts? Mostly one or two?
My having observed extremely similar dynamics about, e.g. sexual harassment, in se...
Please don't pin the actions of others on me!
No, it's not, especially given that 'whataboutism' is a label used to dismiss comparisons that don't advance particular arguments.
Writing the words "what about" does not invalidate any and all comparisons.
I think the quoted text is inflammatory and "this forum" (this site) isn't the same as wherever the alleged bad behavior took place.
Is contradicting something you believe to be, essentially, false equivalent to "denial"?
It is anomalous that people are quite uninterested in optimizing this as it seems clearly important.
I have the opposite sense. Many people seem very interested in this.
"This community" is a nebulous thing and this site is very different than any of the 'in-person communities'.
But I don't think there's strong evidence that the 'communities' don't already "have much lower than average levels of abuse". I have an impression that, among the very-interested-in-this people, any abuse is too much.
What kind of more severe punishment should "the rationalist community" mete out to X and how exactly would/should that work?
You seem to be describing something that's so implausible it might as well be impossible.
Given the existing constraints, I think ARC made the right choice.
Do you think ARC should have publicized the labs' demands for non-disclosure instead of performing the exercise they did?
I think that would have been a bad trade.
I also don't think there's much value in their whistleblowing about any kind of non-disclosure that the labs might have demanded. I don't get the sense there's any additional bad (or awful) behavior – beyond what's (implicitly) apparent from the detailed info ARC has already publicly released.
I think it's very useful to maintain sufficient incentives for the labs to want to allow things l...
Wouldn't it be better to accept contractual bindings and then at least have the opportunity to whistleblow (even if that means accepting the legal consequences)?
Or do you think that they have some kind of leverage by which the labs would agree to NOT contractually bind them? I'd expect the labs to just not allow them to evaluate the model at all were ARC to insist on or demand this.
I'm definitely not against reading your (and anyone else's) blog posts, but it would be friendlier to at least outline or excerpt some of the post here too.
It looks like you didn't (and maybe can't) enter the ASCII art in the form Bing needs to "decode" it? For one, I'd expect line breaks both before and after the code block tags, and also between each 'line' of the art.
If you can, try entering new lines with <kbd>Shift</kbd>+<kbd>Enter</kbd>. That should insert a new line without it being interpreted as 'send message'.
I really like David's writing generally but this 'book' is particularly strong (and pertinent to us here on this site).
The second section, What is the Scary kind of AI?, is a very interesting and (I think) useful alternative perspective on the risks that 'AI safety' does and (arguably) should focus on, e.g. "diverse forms of agency".
The first chapter of the third ('scenarios') section, At war with the machines, provides a (more) compelling version of a somewhat common argument, i.e. 'AI is (already) out to get us'.
The second detailed scenario, in the third c...
This seems like the right trope:
That's why I used a fatal scenario, because it very obviously cuts all future utility to zero
I don't understand why you think a decision resulting in some person's or agent's death "cuts all future utility to zero". Why do you think choosing one's death is always a mistake?
I think I'd opt to quote the original title in a post here to indicate that it's not a 'claim' being made (by me).
IIRC, RDIs (and I would guess EARs) vary quite significantly among the various organizations that calculate/estimate/publish them. That might be related to the point ChristianKI seemed to be trying to make. (Tho I don't know whether 'iron' is one of the nutrients for which this is, or was, the case.)
I can't tell which parts are ChatGPT's output and which are your prompts or commentary.
I don't think 'chronic fatigue syndrome' is a great example of what the post discusses because 'syndrome' is already a clearly technical (e.g. medical) word. Similarly, 'myalgic encephalomyelitis' is (for most listeners or readers) not a phrase made up of common English words. Both examples seem much more clearly medical or technical terms. 'Chronic fatigue' would be a better example (if it were widely used), as it would conflate the unexplained medical condition with anything else that might have the same effects (like 'chronic overexertion').
The only benefit of public schools anymore, from what I can tell, is that very wise and patient parents can use it to support their children in mastering Defense Against the Dark Arts.
Well, that and getting to play with other kids. Which is still pretty cool.
This may be an under-appreciated function of (public) schooling!
I would think the title is itself a content warning.
I guess someone might expect this post to be far more abstract and less detailed about the visceral realities than it is (or maybe even to use the topic only as a metaphor).
What kind of specific content warning do you think would be appropriate? Maybe "Describes the dissection of human bodies in vivid concrete terms."?
I was going to share it with you if you didn't have it, but thanks!
Has anyone shared the link with you yet?
After a long day of work, you can kick back with projectlawful for a few hours, and then go to sleep. You can read projectlawful on the weekend. You can read projectlawful on vacation. It's rest and rejuvenation and recharging ...
I did NOT find this to be the case – I found it way TOO engaging, to the point that it, e.g., actively disrupted my ability to go to sleep. I also found the story to be extremely upsetting, i.e. NOT restful or rejuvenating. As of now, it's extremely bleak.
I very much DO like it and I am perfectly happy that it's a glowfic. (Ther...
I think 'againstness' is nearly perfect :)
I didn't think anything was confusing!
'Againstness' felt like a nearly self-defining word to me.
Your course presented a rough/sketched/outlined model based on other models at various levels, and there are a few example techniques based on it (in the course).
"againstness control" is totally sensible – just like, e.g. 'againstness management' and 'againstness practice', are too.
I think there's an implied (and intriguing) element of using SNS arousal/dominance for, e.g. motivation. I think there are some times or circumstances...
I think I'm missing a LOT of context you have about this. I very well could be – probably am – missing some point, but I also feel like you're discouraging me from voicing anything that doesn't assume whatever your point is. Is it just that "Stephen Wolfram is bad and everyone should ignore him."? I honestly tried to investigate this, however poorly I might have done that, but this comment comes across as pretty hostile. Is it your intention to dissuade me from writing about this at all?
...They bring it up because it is a shocking violation of norms, even c
I now think it is plausible that Wolfram sued "over literary conventions":
I suspect that Wolfram just wanted to reveal the relevant proof himself, first, in his book NKS (A New Kind of Science), and that Matthew Cook probably was contractually obligated to allow Wolfram to do that.
That the two parties settled, and that Cook published his paper about his proof in Wolfram's own journal (Complex Systems) two years after NKS was published, seems to mostly confirm my suspicions.
The 'components' of our diet, e.g. meat, potatoes, etc., are very different now than they were before, and they've changed more over the last 100 years than in prior periods too.
I suspect tho that people doing diets like this, e.g. the Amish, are much less obese.
I've weirdly been less and less bothered since my previous comment! :)
I think "planecrash" is a better overall title still, so thanks for renaming all of the links.
Huh – I wonder if this has helped me since I made a concerted effort to eat leafy greens regularly (basically every day).
I always liked the 'fact' that celery has net-negative calories :)
I do also lean towards eating fruit raw versus, e.g. blended in a smoothie. Make-work for my gastrointestinal system!
I think you're making an unsupported inferential leap in concluding "they seem oddly uninterested in ...".
I would not expect to know why they haven't responded to my comments, even if I did bring up a good point – as you definitely have.
I don't know, e.g. what their plans are, whether they're even the kind of blogger that edits posts versus writing new follow-up posts instead, how much free time they have, whether they interpreted a comment as being hostile and thus haven't replied, etc.
You make good points. But I would be scared if you 'came after me' as you seem to be doing to the SMTM authors!
It just seems to me that the SMTM authors are doing a very bad job at actually pursuing the truth
I think – personally – you're holding them to an unrealistically high standard!
When I compare SMTM to the/a modal person or even a modal 'rationalist', I think they're doing a fantastic job.
Please consider being at least a little more charitable and, e.g. 'leaving people a line of retreat'.
We want to encourage each other to be better, NOT discourage anyone from trying at all! :)
I was, and still am (tho much less), excited about the contamination theory – it would be much easier to fix!
But I think I'm back to thinking basically along the lines you outlined.
I'm currently losing weight and my model of why is:
I also thought it was (plausibly) a 'friendly challenge' – we should be willing to bet on our beliefs!
And we should be willing to bet and also trust each other to not defect from our common good.
The challenge did specify [emphasis mine]:
up to $1000
I think they're a proponent of the 'too palatable food' theory.
Thanks!
I've definitely downgraded the (lithium) contamination theory. I'll still take a (very modest) 100:1 bet on it tho :)
In regard to your (implied) criticism that SMTM's blog post(s) haven't been edited, it occurred to me that they may not be an 'edit blog posts' person. That seems related to their offered reasons for refusing the bet challenge, i.e. 'we're in hypothesis exploration mode'. They might similarly be intending to write a follow-up blog post instead of editing the existing one.
(I actually prefer 'new post' over 'edit existing post' as a blogging/writing policy – if there isn't a very nice (e.g. 'GitHub-like') diff visualization of the edit history available.)
I admit now that I was in fact missing the point.
I can (maybe/kinda) imagine someone else doing something like this and not definitely thinking it was wholly unjustified, but I agree now that this is a damning part of a larger damning (and long-enduring) pattern of bad behavior on Wolfram's part.
You were right. I was wrong.