I've found the AI Village amusing when I can catch glimpses of it, but I wasn't aware of a regular digest. Is https://theaidigest.org/village/blog what you are referring to?
These posts always leave me feeling a little melancholy that my life doesn't seem to have that many challenges where thinking faster/better/harder/sooner would actually help.
Most of my waking hours are spent on my job, where cognitive performance is not at all the bottleneck. (I honestly believe that if you made me 1.5x "better at thinking", it would not give a consistent boost in output as valued by the business. I'm a software engineer.) I have some intellectual spare-time hobbies, but the most demanding of them is studying Japanese, which is more about volume, exposure, and spaced repetition than clever strategies. I am intrigued by making myself more productive in my programming side...
It's interesting to compare this to the other curated posts I got in my inbox over the last week, What is malevolence? and How will we update about scheming. Both of those (especially the former) I bounced off of due to length. But this one I stuck with for quite a while, before I started skimming in the worksheet section.
I think the instinct to apply a length filter before sending a post to many people's inboxes is a good one. I just wish it were more consistently applied :)
Finding non-equilibrium quantum states would be evidence for pilot wave theory, since they're only possible in a pilot wave theory.
If you can find non-equilibrium quantum states, they are distinguishable. https://en.m.wikipedia.org/wiki/Quantum_non-equilibrium
(Seems pretty unlikely we'd ever be able to definitively say a state was non-equilibrium instead of some other weirdness, though.)
I can help confirm that your blind assumption is false. Source: my undergrad research was with a couple of the people who have tried hardest, which led to me learning a lot about the problem. (Ward Struyve and Samuel Colin.) The problem goes back to Bell and has been the subject of a dedicated subfield of quantum foundations scholars ever since.
This many years distant, I can't give a fair summary of the actual state of things. But a possibly unfair summary based on vague recollections is: it seems like the kind of situation where specialists have something that kind of works, but people outside the field don't find it fully satisfying. (Even...
This is great. Until it's on Spotify, this will be the best way to share it on social media.
May I suggest adding lyrics, either in the description or as closed captions or both?
If you are willing to share, can you say more about what got you into this line of investigation, and what you were hoping to get out of it?
For my part, I don't feel like I have many issues/baggage/trauma, so while some of the "fundamental debugging" techniques discussed around here (like IFS or meditation) seem kind of interesting, I don't feel too compelled to dive in. Whereas, techniques like TYCS or jhana meditation seem more intriguing, as potential "power ups" from a baseline-fine state.
So I'm wondering if your baseline is more like mine, and you ended up finding fundamental debugging valuable anyway.
It seems we have very different abilities to understand Holtman's work and find it intuitive. That's fair enough! Are you willing to at least engage with my minimal-time-investment challenge?
These are the most compelling-to-me quotes from "Simulators", saved for posterity.
Perhaps it shouldn’t be surprising if the form of the first visitation from mindspace mostly escaped a few years of theory conducted in absence of its object.
…when AI is all of a sudden writing viral blog posts, coding competitively, proving theorems, and passing the Turing test so hard that the interrogator sacrifices their career at Google to advocate for its personhood, a process is clearly underway whose limit we’d be foolish not to contemplate.
GPT-3 does not look much like an agent. It does not seem to have goals or preferences beyond completing text, for example. It is more like a chameleon that...
After years of lurking in this community I finally broke down and Read the Sequences™. Specifically, I read the 2015 edition of Rationality: From AI to Zombies, cover to cover.
Over the course of reading it, I became aware that it wasn't comprehensive. There would often be links going to Eliezer essays on LessWrong that weren't included in the book. I read several of these and enjoyed them. So before I move on to start filling my reading time with other subjects, I'd like to complete my tour through the rationalist community canon.
Does anyone have a comprehensive list of the omitted essays? Or any highlights? Are any of the 2018 updates to AI to Zombies notable?
So far I'm planning to read the Fun Theory Sequence and to track down more content on anthropics-adjacent topics (reality fluid, etc.) by following links from Timeless Identity.
Echoing what others have said here, this article was quite well-written. It felt well suited for people who do not know much about the field, with good analogies, recaps of foundational concepts, and links to various fun events not everyone will have caught (e.g. DeepThink switching to Chinese, or Golden Gate Claude). But it did that without being grating toward those of us who have been following along and for whom much of this was review, which is especially impressive.
You might want to consider pitching this, or your future writing, to outlets larger than LessWrong! I imagine your writing would be a perfect fit for detail-loving places like Quanta or Asterisk, but maybe larger online outlets (The Verge? AnandTech?? I don't know what people read these days) would be interested.