I think I've got an idea of what meditation people might mean by doing away with the self. Once you start thinking about the lower-level mechanics of the brain, you start thinking about representations. Instead of the straightforward assertion "there's a red apple on that table", you might start thinking "my brain is holding a phenomenal representation of a red apple on a table". You'll still assume there's probably a real apple out there in the world too, though if you're meditating you might specifically try not to assign meanings to phe...
Lewis Dartnell's The Knowledge: How to Rebuild Our World from Scratch is a sort of grand tour of the technological underpinnings of industrial civilization and how you might bootstrap them. Might be a bit dry, but it's popular writing, and if the kid's already reading encyclopedias it should fit right in. Lots of concrete details about specific technologies.
Might go for a left-field option and see what he makes of Euclid's Elements.
I haven't tried galantamine, but the drugless techniques were far from interchangeable for me. The standard advice of keeping a dream diary, psyching yourself up to have a lucid dream, and doing reality checks never worked at all for me. Wake-back-to-bed, on the other hand, got me dozens of lucid dreams and often worked the first time I tried it after a break. It's also annoying to do, because it involves messing with your sleep cycle and waking yourself up in the early morning, and it seems to always stop working if I try it multiple nights in a row.
Agree with the ...
Probably not everyone who participates is a GitHub-using programmer, but if they were, a stupid five-minute solution might be to just set up a private GitHub project and use its issue tracker for forum threads.
I had the same problem, then I started mixing cottage cheese in the oatmeal and that fixed it.
Back when I read about people claiming a RepRap can reproduce itself, I felt like the claim implied it would build the electronics of the new RepRap from scratch as well and was confused since obviously a 3D printer can't double as a chip fab. The gold standard for a self-replicating machine for me is something like plants, which can turn high-entropy raw materials like soil and ores into itself given a source of energy. I guess you could talk about autotrophic self-reproducing machines that can do their thing given a barren planet and sunlight, and hetero...
Great post. I've been trying to find SF reviews that aren't just blurbs, to get an idea of what's going on with the scene currently. With the exception of Tchaikovsky, most authors whose names keep popping up seem to still be ones who started publishing back in the 20th century. Unfortunately, I already know about most of the books on this list, so I'm going to write a wishlist of books I've heard of but don't know much about and would like to see reviews of:
James Gleick's Genius cites a transcript of "Address to Far Rockaway High School" from 1965 (or 1966, according to this from the California Institute of Technology archives) of Feynman talking about how he got a not-exceptionally-high score of 125 on his IQ test. I couldn't find an online version of the transcript anywhere with a quick search.
I've stopped trying to make myself do things I don't want to do. Burned out at work, quit my job, became long-term unemployed. The world is going off-kilter, the horizons for comprehensible futures are shrinking, and I don't see any grand individual-scale quest to claw your way from the damned into the elect.
How many users can you point to who started out making posts that regularly got downvoted to negative karma and later became good contributors? Or, alternatively, specific ideas that were initially presented only by regularly downvoted users and were later recognized as correct and valuable? My starting assumption is that it's basically wishful thinking to expect this to happen much under any community circumstances: people who write badly will mostly keep writing badly, and people who end up writing outstanding stuff mostly start out writing better-than-average stuff.
Please do not vote without an explanatory comment (votes are convenient for moderators, but are poor intellectual etiquette, sans information that would permit the “updating” of beliefs).
This post is written in a terrible style. Based on your posting history, you've been here for a year writing similarly badly styled posts; people have commented on the style, and you have neither engaged with those comments nor tried to improve your writing. Why shouldn't people just downvote and move on at this point?
Is this your first time running into Zack's stuff? You sound like you're talking to someone who showed up out of nowhere with a no-context crackpot manuscript and zero engagement with the community. Zack's post is about his actual engagement with the community over a decade. We've seen a bunch of that previous engagement (in pretty much the register we see here, so this doesn't look like an ongoing psychotic break), he's responsive to comments, and his thesis generally makes sense. This isn't drive-by crackpottery, and it's on LessWrong because it's about LessWrong.
I agree that Zack has a long history of engagement with the rationalist community, and that this post is a continuation of that history (in a predictable direction).
But that doesn't necessarily make this engagement sane.
From my perspective, Zack has a long-term obsession, and also he is smart enough to be popular on LessWrong despite the fact that practically everything he says is somehow connected to this obsession (and if for a moment it seems like it is not, that's just because he is preparing some convoluted meta argument that will later be used to sup...
Record-keeping isn't enough to make you a scientist. People might be making careful records and then analyzing them badly, and if there's no actual effect going on, selection effects will leave you with a community of mis-analyzers.
The PDF is shown in full for me when I scroll down the academia.edu page, here's an archive.is capture in case this is some sort of intermittent A/B testing thing.
There might not be, but it's not a thing in a vacuum; it was coined with political intent and it's tangled up with that intent.
Blithely adopting a term that seems to have been coined just for the purposes of doing a smear job makes you look like either a useful idiot or an enemy agent.
The post reads like a half-assed college essay where you're going through the motions of writing without things really coming together. It's heavy on structure, but there's no clear thread of rhetoric progressing through it, and it's hard to get a clear sense of where you're coming from with the whole thing. The overall impression is just a list of disjointed arguments, essay over.
I've been gaming for some 35 years and I don't play any multiplayer games at all. I don't remember the ten or so people in my social hangouts who regularly talk about what they're playing talking much about PvP either; they seem to mostly play single-player simulator, grand strategy, and CRPG games, or cooperative multiplayer games.
All else being equal, do you prefer to live in a society where many members are madmen and idiots or in a society where few members are madmen and idiots?
"It can't happen and it would also be bad if it happened" seems to be a somewhat tempting way to argue these topics. When trying to convince an audience that thinks "it probably can happen and we want to make it happen in a way that gets it right", it seems much worse than sticking strictly to either "it can't happen" or "we don't know how to get it right for us if it happens". When you switch to talking about how it would be bad, you come off as scared and lying about the part where you assert it is impossible. It has the same feel as an 18th century theo...
Ted Kaczynski
This sounds drastic enough that it makes me wonder: since the claimed reason was that Said's commenting style was driving high-quality contributors away from the site, do you have a plan to follow up and see whether there's any measurable increase in comment quality, site mood, or good contributors becoming more active going forward?
Also, is this an experiment with a set duration, or a permanent measure? If it's permanent, it has a very rubber-room vibe to it, where you don't outright ban someone, but you continually humiliate them if they keep coming by and hope they'll eventually take the hint.
(That person is more responsible than any other single individual for Eliezer not being around much these days.)
Wait, the only thing I remember Said and Eliezer arguing about was Eliezer's glowfic. Eliezer dropped out of LW over an argument about how he was writing about tabletop RPG rules in his fanfiction?
There are already social security means-testing regimes that prod able-bodied applicants to apply for jobs and to spend their existing savings before granting them payments. If sex work and organ sales are fully normalized, these might get extended into denying social security payments until people have tried to support themselves by selling a kidney and doing sex work.
The shift we're looking at is going from program code that's very close to a computer's inner workings to natural human language for specifying systems, where the specification must nevertheless still unambiguously describe the business problem the program needs to solve. We already have a profession for unambiguously specifying, in natural language, complex systems with multiple stakeholders and possibly complex interactions between their parts. It's called being a legislator, and it's very much not an unskilled job.
I understand esoteric as something that's often either fundamentally difficult to grasp (i.e. an esoteric concept described in a short cryptic text might not be comprehensibly explainable even by a text five times longer that anyone who understands the subject matter could straightforwardly write) or intentionally written in a way that keeps it obscured from a cursory reading. The definition of hieratic doesn't really connote conceptual difficulty beyond mundane technical complexity, or any particular intention to keep things hidden; it just says that writing can be made much more terse if you assume an audience that is already familiar with what it's talking about.
I'm somewhat confused why Nolan Funeral Home is one of the organizations you needed to contact about panspermia contagion, via some random person's memorial page. Is this some kind of spam program gone awry?
Why not fill the detergent compartment immediately after emptying the dishwasher? Then you have closed detergent slot -> dirty dishes, open detergent slot -> clean dishes.
Have you run the numbers on these? For example
there are never two different subjects claiming to have been the same person
sounds like a case of the birthday paradox. Assume there have been on the order of 10^11 dead people since 8000 BCE. So if you have a test group of, say, 10,000 reincarnation claimants, and all of them can have memories of any dead person, already claimed or not, what's the probability of actually observing two of them claiming the same dead person?
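A quick sketch of that calculation, assuming (hypothetically) that each claimant draws a past life uniformly and independently from the pool of the dead:

```python
import math

N = 10**11  # rough number of people who have died since 8000 BCE
n = 10_000  # hypothetical pool of reincarnation claimants

# Birthday-paradox approximation: with n independent uniform draws from
# N possibilities, P(at least one shared draw) is about 1 - exp(-n(n-1)/(2N)).
p_shared_claim = 1 - math.exp(-n * (n - 1) / (2 * N))
print(f"{p_shared_claim:.4%}")  # roughly 0.05%
```

So even with ten thousand claimants, the chance of a single overlapping claim is around one in two thousand, and the observed absence of duplicates carries almost no evidence either way.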
The bit about the memories always being from dead people is a bit more plausible. We se...
But I’m curious now, is there a fairly sizable contingent of academic/evidential dualists in the rationalist community?
It's more empirical than ideological for me. There are these pockets of "something's not clear here", where similar things keep being observed, don't line up with any current scientific explanation, and even people who don't seem obviously biased start going "hey, something's off here". There's the recent US Navy UFO sightings thing that nobody seems to know what to make of, there's Daryl Bem's 2011 ESP study that follows stuff by peo...
Any thoughts on Rupert Sheldrake? Complex memories showing up with no plausible causal path sounds a lot like his morphic resonance stuff.
Also, old thing from Ben Goertzel that might be relevant to your interests, Morphic Pilot Theory hypothesizes some sort of compression artifacts in quantum physics that can pop up as inexplicable paranormal knowledge.
Still makes sense if you listen when walking or driving when you couldn't read a book anyway. I mostly listen to podcasts instead of audiobooks though, a book is a really long commitment compared to a podcast episode.
Podcast transcription services, probably. They seem to cost around $1 per minute nowadays, and I expect they'll keep getting disrupted by AI. There are already audio transcription AIs, like the autogenerated subtitles on YouTube, but they get context-dependent ambiguous words wrong. It seems like an obvious idea to plug them into a GPT-style language model that can recognize the topic being talked about and use that to pick an appropriate transcription for homonyms.
You seem to be claiming that whatever does get discovered, which might be interpreted as proof of the spiritual in another climate, will get distorted to support the materialist paradigm. I'm not really sure how this would work in practice. We already have something of a precommitment to what we expect the "supernatural" to look like: ontologically basic mental entities. So far the discoveries of science have been nothing like that, and if new scientific discoveries suddenly were, I find it very hard to imagine quite many people outside of the "pri...
Are people here mostly materialists?
Okay, since you seem interested in knowing why people are materialists. I think it's the history of science up until now. The history of science has basically been a constant build-up of materialism.
We started out at prehistoric animism, where everything that happened, except that rock you just threw at another rock, was driven by an intangible spirit. The rock wasn't, since that was just you throwing it. And then people started figuring out successive compelling narratives about how more complex stuff is just rocks being thr...
OP might be some sort of content farming sockpuppet. No activity other than this post, and this was posted within a minute of a (now deleted) similarly vacuous post from a different account with no prior site activity as well.
In a Facebook post I argued that it’s fair to view these things as alive.
Just a note, unlike in the recent past, Facebook post links seem to now be completely hidden unless you are logged into Facebook when opening them, so they are basically broken as any sort of publicly viewable resource.
Well, that's just terrible.
Here's the post:
...I think the world makes more sense if you recognize humans aren't on the top of the food chain.
We don't see this clearly, kind of like ants don't clearly see anteaters. They know something is wrong, and they rush around trying to deal with it, but it's not like any ant recognizes the predator in much more detail than "threat".
There's a whole type of living being "above" us the way animals are "above" ants.
Esoteric traditions sometimes call these creatures "egregores".
Carl Jung called a special subset of them "arch
You seem to frame this as either there being advanced secret techniques, or it just being a matter of common sense and wisdom and as good as useless. Maybe there's some initial value in just trying to name things more precisely though, and painting a target of "we don't understand this region that has a name now nearly as well as we'd like" on them. Chapman is a former AI programmer from the 1980s, and my reading of him is that he's basically been trying to map the poorly understood half of human rationality whose difficulty blindsided the 20th century AI ...
You really do have to gesture vaguely, and then say “GO DO THINGS YOU DON’T KNOW HOW TO DO”, and guide them to reflect on what they’re doing when they don’t know what they’re doing.
This is pretty much what I'm referring to as the "mystery": it's not that it's fundamentally obscure, it's just that the expected teaching contract of "I will tell you how to do what I expect you to do in clear language" breaks down at this point, and instead you would need to say "I've been giving you many examples that work backwards from a point where the problem has alrea...
A fully meta-rational workplace is still sorta waffly about how you actually accomplish the thing, but it feels like an okay example of showing meta-rationality as "the thing you do when you come up with the rules, procedures and frameworks for (Chapman's) rational level at the point of facing undifferentiated reality without having any of those yet".
People have argued that this is still just rationality in the Lesswrong sense, but I think Chapman's on to something in that the rules, procedures and frameworks layer is very teachable and generally explicab...
Hello new user mocny-chlapik who dropped in to tell us that talking about AGI is incoherent because of Popper, welcome to Less Wrong. Are you by chance friends with new user Hickey who dropped in a week ago to tell us that talking about AGI is incoherent because of Popper?
Also a good point, though this is maybe a different thing from the deliberate effort thing again. The whole concept of "be equal to the [top visible person] in [field of practice]" sounds like a weak warning signal to me if it's the main desire in your head. This sounds like a mimetic desire thing where [field of practice] might actually be irrelevant to whatever is ticking away in your head and the social ladder game is what's actually going on.
A healthier mindset might be "I really want to make concepts that confuse me clearer", "I have this really cool-...
If some topics are too complex, they could be written in multiple versions, progressing from the most simple to the most detailed (but still as accessible as possible).
Wasn't Arbital pretty much supposed to be this?
It’s totally possible to think there’s a plain causal explanation about how humans evolved (through a combination of drift and natural selection, in which proportion we will likely never know) - while still thinking that the prospects for coming up with a constitutive explanation of normativity are dim (at best) or outright confused (at worst).
If we believe there is a plain causal explanation, that rules out some explanations we could imagine. It should no longer be possible for humans to have been created by a supernatural agency (as was widely thought in...
Reductionism is not just the claim that things are made out of parts. It's a claim about explanation, and humans might not be smart enough to perform certain reductions.
So basically the problem is that we haven't got the explanation yet and can't seem to find it with a philosopher's toolkit? People have figured out a lot of things (electromagnetism, quantum physics, airplanes, semiconductors, DNA, visual cortex neuroscience) by mucking around with physical things they had very little idea of beforehand, not just by being smart and thinking hard. Seems li...
The dark age might have gotten darker recently. Everyone's scrabbling around trying to figure out what AI will mean for programming as a profession going forward, and AI mostly only boosts established languages it has large corpora of working code for.
I've been following the Rust project for the last decade and have been impressed at just how much peripheral scutwork contributes to making the language and ecosystem feel solid. This stuff is a huge undertaking. I'm not terribly excited any more about incremental-improvement languages. They seem to be mostly...