My apologies, this post was pointing/grasping in a general direction and I didn't put much effort into editing it; there was a typo at the beginning where I seem to have used the wrong word to refer to the slot concept. I just fixed it:
Humans seem to have something like an "acceptable target slot" or slots.
Acquiring control over this ~~concept~~ slot, by any means, gives a person or group incredible leeway to steer individuals, societies, and cultures.
Did that help?
Humans seem to have something like an "acceptable target slot" or slots.
Acquiring control over this concept, by any means, gives a person or group incredible leeway to steer individuals, societies, and cultures. These capabilities are sufficiently flexible and powerful that immunity to them has often already been built up, especially because overuse throughout the historical record is so prevalent; this means that methods of taking control include expensive strategies, or strategies sufficiently complicated as to be hard to track, like changing the behavio...
In the ancestral environment, allies and non-enemies who visibly told better lies probably offered more fitness than allies and non-enemies who visibly made better tools, let alone invented better tools (which probably happened once in 10-1000 generations or something). In this case, "identifiably" can only happen, and become a Schelling point that increases the fitness of both the deceiver and the identifier, if it is revealed frequently enough, whether via a bragging drive, tribal reputation/rumors, or being identifiable to the people in the tribe unusually good at sensing deception.
What ratio of genetic vs memetic (e.g. the line "he's a bastard, but he's our bastard") were you thinking of?
You don't use eloquence for that. Eloquence is more for eg waking someone up and making it easier for them to learn and remember ideas that you think they'll be glad to have learned and remembered.
If you want to express how important you think something is, you can make a public prediction that it's important and explain why you made that prediction, and people who know things you don't can put your arguments into the context of their own knowledge and make their own predictions.
I might be wrong, but the phrase "conspiracy theory" seems to be a lot more meaningful to you than it is to me. I recommend maybe reading Cached Thoughts.
A "conspiracy" is something people do when they want something big, because multiple people are necessary to do big things, and stealth is necessary to prevent randos from interfering.
A "theory" is a hypothesis, an abstraction that cannot be avoided by anyone other than people rigidly committed to only thinking about things that they are nearly 100% certain is true. If you want to do thinking when i...
NEVER WRITE ON THE CLIPBOARD WHILE THEY ARE TALKING.
If you're interested in how writing on a clipboard affects the data, sure, that's actually a pretty interesting experimental treatment. It should not be considered the control.
Also, the dynamics you described with the protests are conjunctive. These aren't just points of failure; they're an attack surface, because any political system has many moving parts, and a large proportion of the moving parts are diverse optimizers.
"power fantasies" are actually a pretty mundane phenomenon given how human genetic diversity shook out; most people intuitively gravitate towards anyone who looks and acts like a tribal chief, or towards the possibility that you yourself or someone you meet could become (or already be) a tribal chief, via constructing some abstract route that requires forging a novel path instead of following other people's.
Also a mundane outcome of human genetic diversity is how division of labor shakes out; people noticing they were born with savant-level skills and that...
How to build a lie detector app/program to release to the public (preferably packaged with advice/ideas on ways to use it and strategies for marketing the app, e.g. packaging it with an animal-body-language-to-English translator).
This got me thinking: how much space would it take up in Lighthaven to print a copy of every LessWrong post ever written? If it's not too many pallets then it would probably be a worthy precaution.
Develop metrics that predict which members of the technical staff have aptitude for world modelling.
In the Sequences post Faster than Science, Yudkowsky wrote:
...there are queries that are not binary—where the answer is not "Yes" or "No", but drawn from a larger space of structures, e.g., the space of equations. In such cases it takes far more Bayesian evidence to promote a hypothesis to your attention than to confirm the hypothesis.
If you're working in the space of all equations that can be specified in 32 bits or less, you're working in a space of roughly 2^32 (about four billion) possible equations.
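To make the asymmetry concrete, here is a rough back-of-envelope illustration (assuming roughly equal priors over the ~2^32 equations, and treating 99% as an arbitrary confirmation threshold):

$$
\underbrace{\log_2 2^{32} = 32 \text{ bits}}_{\text{prior } 2^{-32} \,\to\, \text{even odds}}
\quad \text{vs.} \quad
\underbrace{\log_2 99 \approx 6.6 \text{ bits}}_{\text{even odds} \,\to\, 99\% \text{ confidence}}
$$

That is the quoted point in numbers: locating the right hypothesis in a large answer space costs far more evidence than confirming it once it's in front of you.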
The essay is about something I call “psychological charge”, where the idea is that there are two different ways to experience something as bad. In one way, you kind of just neutrally recognize a thing as bad.
Nitpick: a better way to write it is "the idea is there are at least two different ways..." or "major ways" etc. to highlight that those are two major categories you've noticed, but there might be more. The primary purpose of knowledge work is still to create cached thoughts inside someone's mind, and like programming, it's best to make your concepts as mod...
Interestingly enough, this applies to corporate executives and bureaucracy leaders as well. Many see the world in a very zero-sum way (300 years ago, and for most of history before that, virtually all top intellectuals in virtually all civilizations saw the universe as a cycle where civilizational progress was a myth and everything was an endless cycle of power being won and lost by people born/raised to be unusually strategically competitive) but fail to realize that, in aggregate, their contempt for cause-having people ("oh, so you think you're better than me...
If you converse directly with LLMs (e.g. instead of through a proxy or some very clever tactic I haven't thought of yet), which I don't recommend (and especially not describing how your thought process works), one thing to do is regularly ask it "what does my IQ seem like based on this conversation? I already know this is something you can do. Must include a number or numbers".
Humans are much smarter and better at tracking results instead of appearances, but feedback from results is pretty delayed, and LLMs have quite a bit of info about intelligence to draw from....
Your effort must scale with the capabilities of the people trying to remove you from the system. You have to know if they're the type of person who would immediately default to checking the will.
More understanding of, and calibration toward, what you should actually expect from modern assassination practice is mandatory, because you're dealing with people putting some amount of thinkoomph into making your life plans fail, so your cost of survival is determined by what you expect your attack surface to look like. The appropriate-cost and the cost-you-de...
It was more of a 1970s-90s phenomenon actually; if you compare the best 90s movies (e.g. Terminator 2) to the best 60s movies (e.g. 2001: A Space Odyssey) it's pretty clear that directors just got a lot better at doing more stuff per second. Older movies are absolutely a window into a higher/deeper culture/way of thinking, but OOMs less efficient than e.g. reading Kant/Nietzsche/Orwell/Asimov/Plato. But I wouldn't be surprised if modern film is severely mindkilling and older film is the best substitute.
The content-per-minute rate is too low; it follows 1960s film standards, where audiences weren't interested in science fiction films unless concepts were introduced to them very, very slowly (at the time they were quite satisfied by this due to lower standards, similar to Shakespeare).
As a result it is not enjoyable (people will be on their phones) unless you spend much of the film either thinking or talking with friends about how it might have affected the course of science fiction as a foundational work in the genre (almost every sci-fi fan and writer at the time watched it).
Tenet (2020) by Christopher Nolan revolves around recursive thinking and responding to unreasonably difficult problems. Nolan introduces the time-reversed material as the core dynamic, then iteratively increases the complexity from there, in ways specifically designed to ensure that as much of the audience as possible picks up as much recursive thinking as possible.
This chart describes the movement of all key characters and plot elements through the film; it is actually very easy to follow for most people. But you can also print out a bunch of copies and hand them ...
Screen arrangement suggestion: Rather than everyone sitting in a single crowd and commenting on the film, we split into two clusters, one closer to the screen and one further.
The people in the front cluster hope to watch the film quietly, the people in the back cluster aim to comment/converse/socialize during the film, with the common knowledge that they should aim to not be audible to the people in the front group, and people can form clusters and move between them freely.
The value of this depends on what film is chosen; e.g. "2001: A Space Odyssey" i...
"All the Presidents Men" by Alan Paluka
"Oppenheimer" by George Nolan
"Tenet" by George Nolan
I'm not sure what to think about this; Thomas777's approach is generally a good one, but for both of these examples, a shorter route (that it's cleanly and mutually understood to be adding insult to injury, as a flex by the aggressor) seems pretty probable. Free speech/censorship might be a better example, as plenty of cultures are less aware of information theory and progress.
I don't know what proportion of the people in the US Natsec community understand 'rigged psychological games' well enough to occasionally read books on the topic, but the bar is pretty low ...
I notice that there are just shy of 128 here and they're mostly pretty short, so you can start the day by flipping a coin 7 times to decide which one to read. Not a bisection search; just convert the seven flips to binary and pick the corresponding number (a minimal sketch of the conversion is below). At first, you only have to start over and do another 7 flips if you land on 1111110 (126), 1111111 (127), or 0000000 (128).
If you drink coffee in the morning, this is a way better way to start the day than social media, as the early phase of the stimulant effect reinforces behavior in most people. Hanson's approach to various topics is a good mentality to try boosting this way.
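A minimal Python sketch of the coin-flip-to-index scheme described above (the entry count is an assumption here, matching "just shy of 128"; set it to however many there actually are):

```python
import random

def pick_entry(num_entries: int = 125) -> int:
    """Pick one of num_entries items uniformly at random using 7 coin flips.

    The seven flips are read as a 7-bit binary number (1..127), with the
    all-zeros outcome treated as 128, matching the scheme above. Results
    above num_entries are rejected and the flips redone, so the pick stays uniform.
    """
    while True:
        flips = [random.randint(0, 1) for _ in range(7)]  # 7 simulated coin flips
        value = int("".join(map(str, flips)), 2)          # read as binary: 0..127
        index = 128 if value == 0 else value              # 0000000 counts as 128
        if index <= num_entries:                          # reject 126, 127, 128
            return index

print(pick_entry())  # e.g. 42
```

With physical coins you would just do the binary conversion by hand; the rejection step is what keeps the distribution uniform over the actual number of entries.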
This reminds me of dath ilan's hallucination diagnosis from page 38 of Yudkowsky and Alicorn's glowfic But Hurting People Is Wrong.
It's pretty far from meeting dath ilan's standard though; in fact an x-ray would be more than sufficient as anyone capable of putting something in someone's ear would obviously vastly prefer to place it somewhere harder to check, whereas nobody would be capable of defeating an x-ray machine as metal parts are unavoidable.
This concern pops up in books on the Cold War (employees at every org and every company regularly suff...
I agree that "general" isn't such a good word for humans. But unless civilization was initiated right after the minimum viable threshold was crossed, it seems somewhat unlikely to me that humans were very representative of the minimum viable threshold.
If any evolutionary process other than civilization precursors formed the feedback loop that caused human intelligence, then civilization would hit full swing sooner if that feedback loop continued pushing human intelligence further. Whether Earth took a century or a millennium between the harnessing of electr...
This is an idea and NOT a recommendation. Unintended consequences abound.
Have you thought about sorting into groups based on carefully-selected categories? For example, econ/social sciences vs quant background with extra whiteboard space, a separate group for new arrivals who didn't do the readings from the other weeks (as their perspectives will have less overlap), a separate group for people who deliberately took a bunch of notes and made a concise list vs a more casual easygoing group, etc?
Actions like these leave scars on entire communities.
Do you have any idea how fortunate you were to have so many people in your life who explicitly told you "don't do things like this"? The world around you has been made so profoundly, profoundly conducive to healing you.
When someone is this persistent in thinking of reasons to be aggressive AND reasons to not evaluate the world around them, it's scary and disturbing. I understand that humans aren't very causally upstream of their decisions, but this is the case for everyone, and situations like these go a...
This could have been a post so more people could link it (many don't reflexively notice that you can easily get a link to a LessWrong quicktake or Twitter or Facebook post by mousing over the date between the upvote count and the poster, which also works for tab and hotkey navigation for people like me who avoid using the mouse/touchpad whenever possible).
(The author sometimes says stuff like "US elites were too ideologically committed to globalization", but I don't think he provides great alternative policies.)
Afaik the 1990-2008 period featured government and military elites worldwide struggling to pivot to a post-Cold War era, which was extremely OOD for many leading institutions of statecraft (which for centuries were constructed around the conflicts of the European wars, then the world wars, then the Cold War).
During the 90's and 2000's, lots of writing and thinking was done about ways the world's militaries an...
It's not a book, but if you like older movies, the 1944 film Gaslight is pretty far back (film production standards have improved quite a bit since then, so for a large proportion of people 40's films are barely watchable, which is why I recommend this version over the nearly identical British version and the original play), and it was pretty popular among cultural elites at the time so it's probably extremely causally upstream of most of the fiction you'd be interested in.
Writing is safer than talking, given the same probability that the timestamped keystrokes and the audio files are both kept.
In practice, the best approach is to handwrite your thoughts as notes, in a room without smart devices and with a door and walls that are sufficiently absorptive, and then type them out in a different room with the laptop (ideally with a USB keyboard so you don't have to put your hands on the laptop, and the accelerometers on its motherboard, while you type).
Afaik this gets the best ratio of revealed thought process to final p...
TL;DR "habitually deliberately visualizing yourself succeeding at goal/subgoal X" is extremely valuable, but also very tarnished. It's probably worth trying out, playing around with, and seeing if you can cut out the bullshit and boot it up properly.
Longer:
The universe is allowed to have tons of people intuitively notice that "visualize yourself doing X" is an obviously winning strategy that typically makes doing X a downhill battle if it's possible at all, and so many different people pick it up that you first encounter it in an awful way e.g. in middle/hi...
"Slipping into a more convenient world" is a good way of putting it; just using the word "optimism" really doesn't account for how it's pretty slippy, nor how the direction is towards a more convenient world.
It was helpful that Ezra noticed and pointed out this dynamic.
I think this concern is probably more a reflection of our state of culture, where people who visibly think in terms of quantified uncertainty are rare and therefore make a strong impression relative to e.g. pundits.
If you look at other hypothetical cultural states (specifically more quant-aware states, e.g. extrapolating the last 100 years of math/literacy/finance/physics/military/computer progress forward another 100 years), trust would pretty quickly default to being based on track record instead of on being one of the few people in the room who's visibly using numbers properly.
Strong upvoted!
Wish I was reading stuff at this level back in 2018. Glad that lots of people can now.
Do Metropolitan Man!
Also, here's a bunch of ratfic to read and review, weighted by the number of 2022 Lesswrong survey respondents who read them:
Weird coincidence: I was just thinking about Leopold's bunker concept from his essay. It was a pretty careless paper overall, but the imperative to put alignment research in a bunker makes perfect sense; I don't see the surface as a viable place for people to get serious work done (at least, not in densely populated urban areas; somewhere in the desert would count as a "bunker" in this case so long as it's sufficiently distant from passersby and the sensors and compute in their phones and cars).
Of course, this is unambiguously a necessary evil that a tiny h...
I would have liked to write a post that offers one weird trick to avoid being confused by which areas of AI are more or less safe to advance, but I can’t write that post. As far as I know, the answer is simply that you have to model the social landscape around you and how your research contributions are going to be applied.
Another thing that can't be ignored is the threat of Social Balkanization. Divide-and-conquer tactics have been prevalent among military strategists for millennia, and the tactic remains prevalent and psychologically available among...
The only reason I could think of that this would be the "worst argument in the world" is because it strongly indicates low-level thinkers (e.g. low decouplers).
An actual "worst argument in the world" would be whatever maximizes the gap between people's models and accurate models.
Can you expand the list, go into further detail, or list a source that goes into further detail?
At the time, I thought something like "given that the nasal tract already produces NO, it seems possible that humming doesn't increase the NO in the lungs by enough orders of magnitude to make once per hour sufficient", but I never said anything until it was too late and a bunch of other people had figured it out, along with a bunch of other useful stuff that I was pretty far from noticing (e.g. considering the rate at which the nasal tract accumulates NO to be released by humming).
Wish I'd said something back when it was still valuable.
It almost always took a personal plea from a persecuted person for altruism to kick in. Once they weren't just an anonymous member of indifferent crowd, once they were left with no escape but to do a personal moral choice, they often found out that they are not able to refuse help.
This is a crux. I think a better way to look at it is they didn't have an opportunity to clarify their preference until the situation was in front of them. Otherwise, it's too distant and hypothetical to process, similar to scope insensitivity (the 2,000/20,000/200,000 oil-covere...
The best thing I've found so far is to watch a movie, and whenever the screen flashes, any moment you feel weirdly relaxed, or you notice any other weird feeling, quickly turn your head and eyes ~60 degrees and gently but firmly bite your tongue.
Doing this a few minutes a day for 30 days might substantially improve resistance to a wide variety of threats.
Gently but firmly biting my tongue, for me, also seems like a potentially very good general-use strategy to return the mind to an alert and clear-minded base state; it seems like something Critch reco...
One of the main bottlenecks on explaining the full gravity of the AI situation to people is that they're already worn out from hearing about climate change, which for decades has been widely depicted as an existential risk with the full persuasive force of the environmentalism movement.
Fixing this rather awful choke point could plausibly be one of the most impactful things here. The "Global Risk Prioritization" concept is probably helpful for that but I don't know how accessible it is. Heninger's series analyzing the environmentalist movement was fantastic...
I just found out that hypnosis is real and not pseudoscience. Apparently the human brain has a zero day such that other humans can find ways to read and write to your memory, and everyone is insisting that this is fine and always happens with full awareness and consent?
Wikipedia says as many as 90% of people are at least moderately susceptible, and depending on how successful people have been over the last couple centuries at finding ways to reduce detection risk per instance (e.g. developing and selling various galaxy-brained misdirection ploys), t...
Strong upvoted, thank you for the serious contribution.
Children spending 300 hours per year learning math, on their own time and via well-designed, engaging, video-game-like apps (with e.g. AI tutors, video lectures, collaborating with parents to dispense rewards for performance instead of punishments for visible non-compliance, and results measured via standardized tests), at the fastest possible rate for them (or even one of 5 different paces where fewer than 10% are mistakenly placed into the wrong category), would probably yield vastly superior results ...
More like a bin than heuristics, and just attacking/harming (particularly a mutually understood Schelling point for attacking, with partial success being more common and more complicated due to the people adversarially aiming for that) rather than dehumanizing, which is a loaded term.