It is both absurd, and intolerably infuriating, just how many people on this forum think it's acceptable to claim they have figured out how qualia/consciousness works, while failing to explain how one would go about making my laptop experience an emotion like 'nostalgia', or to present their framework for enumerating the set of all possible qualitative experiences[1]. When it comes to this particular subject, rationalists are like crackpot physicists with a pet theory of everything, except rationalists go "Huh? Gravity?" when you ask them to explain how their theory predicts gravity, and then start arguing with you about whether a theory of everything even needs to explain gravity. You people make me want to punch my drywall sometimes.
For the record: the purpose of having a "theory of consciousness" is so it can tell us which blobs of matter feel particular things under which specific circumstances, and teach others how to make new blobs of matter that feel particular things. Down to the level of having a field of AI anaesthesiology. If your theory of consciousness does not do this, perhaps because the sum total of your brilliant insights is "systems feel 'things' when they're, y'...
or present their framework for enumerating the set of all possible qualitative experiences (including the ones humans don't experience naturally, the ones accessible only via narcotics, and the ones involving senses humans lack or that just happened never to arise in the animal kingdom)
Strongly agree. If you want to explain qualia, explain how to create experiences and how each experience relates to all the others.
I think Eliezer should've talked more about this in The Fun Theory Sequence, because the properties of qualia are a more fundamental topic than "fun".
And I believe that knowledge about qualia may be one of the most fundamental types of knowledge, i.e. potentially more fundamental than math and physics.
I think Eliezer should've talked more about this in The Fun Theory Sequence, because the properties of qualia are a more fundamental topic than "fun".
I think Eliezer just straight up tends not to acknowledge that people sometimes genuinely, terminally care about their internal experiences, independent of the outside world. Certainly, there are people who care about things other than that, but Eliezer often writes as if people can't care about the qualia themselves - that they must value video games or science instead of the pleasure derived from video games or science.
His theory of fun is thus mostly a description of how to build a utopia for humans who find it unacceptable to "cheat" by using subdermal space heroin implants. That's valuable for him and people like him, but if aligned AGI gets here I will just tell it to reconfigure my brain not to feel bored, instead of trying to reconfigure the entire universe in an attempt to make monkey brain compatible with it. I sorta consider that preference a lucky fact about myself, which will allow me to experience significantly more positive and exotic emotions throughout the far future, if it goes well, than the people who insist they must only feel satisfied after literally eating hamburgers or reading jokes they haven't read before.
This is probably part of why I feel more urgency in getting an actually useful theory of qualitative experience than most LW users.
Why would you expect anyone to have a coherent theory of something they can’t even define and measure?
Because they say so. The problem then is why they think they have a coherent theory of something they can't define or measure.
The Prediction Market Discord Message, by Eva_:
Current market structures can't bill people for the information value that went into the market fairly, can't fairly handle secret information known to only some bidders, pays out most of the subsidy to whoever corrects the naive bidder fastest even though there's no benefit to making it a race, offers almost no profit to people trying to defend the true price from incorrect bidders unless they let the price shift substantially first, can't be effectively used to collate information known by different bidders, can't handle counterfactuals / policy conditionals cleanly, implement EDT instead of LDT, let you play games of tricking other bidders for profit and so require everyone to play trading strategies that are inexploitable even if less beneficial, can't defend against people who are intentionally illegible as to whether they have private information or are manipulating the market for profit elsewhere...
...But most of all, prediction markets contain supposedly ideal economic actors who don't really suspect each other of dishonesty betting money against each other even though it's a net-zero trade and the agreement theorem says th
I’m pretty interested in this as an exercise of ‘okay yep a bunch of those problems seem real. Can we make conceptual or mechanism-design progress on them in like an afternoon of thought?’
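A few of these complaints are concrete enough to poke at in code. Below is a minimal sketch of Hanson's logarithmic market scoring rule (LMSR), the standard subsidized automated market maker, illustrating the "pays out most of the subsidy to whoever corrects the naive bidder fastest" point. The liquidity parameter and the 90% true probability are made-up illustration values, not anything from Eva_'s message:

```python
import math

B = 100.0  # LMSR liquidity parameter (made-up illustration value)

def cost(q_yes: float, q_no: float) -> float:
    """LMSR cost function C(q) = B * ln(e^(q_yes/B) + e^(q_no/B))."""
    return B * math.log(math.exp(q_yes / B) + math.exp(q_no / B))

def buy_yes_to_price(q_yes: float, q_no: float, target_p: float):
    """YES shares needed to move the market price to target_p, and what they cost."""
    # Invert the LMSR price formula: q_yes' = q_no + B * ln(p / (1 - p))
    new_q_yes = q_no + B * math.log(target_p / (1 - target_p))
    return new_q_yes - q_yes, cost(new_q_yes, q_no) - cost(q_yes, q_no)

q_yes = q_no = 0.0  # market opens at 50/50
TRUE_P = 0.9        # two traders privately know the real probability

# Whoever gets there first moves the price to 0.9 and keeps the expected profit.
shares, paid = buy_yes_to_price(q_yes, q_no, TRUE_P)
print(f"first corrector:  E[profit] = {TRUE_P * shares - paid:+.2f}")  # ~ +36.8

# An equally informed trader arriving a moment later faces a correct price.
q_yes += shares
shares2, paid2 = buy_yes_to_price(q_yes, q_no, TRUE_P)
print(f"second corrector: E[profit] = {TRUE_P * shares2 - paid2:+.2f}")  # +0.00
```

The entire subsidy transfer goes to whichever of the two equally informed traders trades first: the mechanism rewards latency, not information, which is the race complaint.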
To the LW devs - just want to mention that this website is probably now the most well designed forum I have ever used. The UX is almost addictively good and I've been loving all of the little improvements over the past year or so.
Pretty much everybody on the internet I can find talking about the issue both mischaracterizes and exaggerates the extent of child sex work inside the United States, often to a patently absurd degree. Wikipedia alone reports that there are anywhere from "100,000-1,000,000" child prostitutes in the U.S. There are only ~75 million children in the U.S., so I guess Wikipedia thinks it's possible that more than 1% of people aged 0-17 are prostitutes. As is usually the case, these numbers are sourced from "anti sex trafficking" organizations that, as far as I can tell, completely make them up.
Actual child sex workers - the kind that get arrested, because people don't like child prostitution - are mostly children who pass themselves off as adults in order to make money. Part of the confusion comes from the fact that the government classifies any instance of child prostitution as human trafficking, regardless of whether or not there's evidence the child was coerced. Thus, when the Department of Justice reports that federal law enforcement investigated "2,515 instances of suspected human trafficking" from 2008-2010, and that "forty percent involved prostitution of a child or child sexual exploit...
The Nick Bostrom fiasco is instructive: never make public apologies to an outrage machine. If Nick had just ignored whoever it was trying to blackmail him, it would have been on them to assert the importance of a twenty-five-year-old deliberately provocative email, and things might not have escalated to the point of mild drama. When he tried to "get ahead of things" by issuing an apology, he ceded that the email was in fact socially significant despite its age, and that he did in fact have something to apologize for, and so opened himself up to the Standard Replies that the apology is not genuine, he's secretly evil, etc.
Instead, if you are ever put in this situation, just say nothing. Don't try to defend yourself. Definitely don't volunteer for a struggle session.
Treat outrage artists like the police. You do not prevent the police from filing charges against you by driving to the station and attempting to "explain yourself" to detectives, or by writing and publishing a letter explaining how sorry you are. At best you will inflate the airtime of the controversy by responding to it, at worst you'll be creating the controversy in the first place.
The problem with trade agreements as a tool for maintaining peace is that they only provide an intellectual and economic reason for maintaining good relations between countries, not an emotional one. People's opinions on war rarely stem from economic self-interest. Policymakers know about the benefits and (sometimes) take them into account, but important trade doesn't make regular Americans grateful to the Chinese for providing them with so many cheap goods - much the opposite, in fact. The number of people who end up interacting with Chinese people or intuitively understanding the benefits firsthand as a result of expanded business opportunities is very small.
On the other hand, video games, social media, and the internet have probably done more to make Americans feel aligned with the other NATO countries than any trade agreement ever. The YouTubers and Twitch streamers I have parasocial relationships with are something like 35% Europeans. I thought Canadians spoke Canadian and Canada was basically some big hippie commune right up until my Minecraft server got populated with them. In some weird alternate universe where people are suggesting we invade Canada, my first instinctual...
The "people are wonderful" bias is so pernicious and widespread I've never actually seen it articulated in detail or argued for. I think most people greatly underestimate the size of this bias, and assume opinions either way are a form of mind-projection fallacy on the part of nice/evil people. In fact, it looks to me like this skew is the deeper origin of a lot of other biases, including the just-world fallacy, and the cause of a lot of default contentment with a lot of our institutions of science, government, etc. You could call it a meta-bias that causes the Hansonian stuff to go largely unnoticed.
I would be willing to pay someone to help draft a LessWrong post for me about this; I think it's important but my writing skills are lacking.
So apparently in 2015 Sam Altman said:
Serious question: Is he a comic book supervillain? Is this world actually real? Why does this quote not garner an emotive reaction out of anybody but me?
I was surprised by this quote. On following the link, the sentence by itself seems noticeably out of context; here's the next part:
On the growing artificial intelligence market: “AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.”
On what Altman would do if he were President Obama: “If I were Barack Obama, I would commit maybe $100 billion to R&D of AI safety initiatives.” Altman also shared that he recently invested in a company doing "AI safety research" to investigate the potential risks of artificial intelligence.
PSA: I have realized very recently, after extensive interactive online discussion with rationalists, that they are exceptionally good at arguing. Too good. Probably there's some inadvertent pre- or post-selection for skill at debating high-concept stuff going on.
Wait a while before acceding to their position in a live discussion with them where you start by disagreeing strongly, for maybe intuitive reasons, and then suddenly find the ground shifting beneath your feet. It took me repeated interactions where I only later realized I'd been hoodwinked by faulty reasoning to notice the pattern.
I think in general believing something before you have intuition around it is unreliable or vulnerable to manipulation, even if there seems to be a good System 2 reason to do so. Such intuition is specialized common sense, and stepping outside common sense is stepping outside your Goodhart scope, where your ability to reason reliably might break down.
So it doesn't matter who you are arguing with: don't believe something unless you understand it intuitively. Usually believing things is unnecessary anyway; it's sufficient to understand them in order to draw conclusions and learn more without committing to belief. And it's often useful to make decisions without committing to believe the premises on which those decisions rest, because some decisions don't wait on the ratchet of epistemic rationality.
An interesting whitepill hidden inside Scott Alexander's SB 1047 writeup was that lying doesn't work as well in politics as predicted. It's possible that if the opposition had lied less often, or we had lied more often, the bill would not have gotten a supermajority in the senate.
Many of you are probably wondering what you will do if/when you see a polar bear. There's a Party Line, uncritically parroted by the internet and wildlife experts, that while you can charge/intimidate a black bear, polar bears are Obligate Carnivores and the only thing you can do is accept your fate.
I think this is nonsense. A potential polar bear attack can be defused just like a black bear attack. There are loads of YouTube videos of people chasing polar bears away by making themselves seem big and aggressive, and I even found some indie documentaries of people who went to the Arctic with expectations of being able to do this. The main trick seems to be to resist the urge to run away, make yourself look menacing, and commit to warning charges in the bear's general direction until it leaves.
Does anybody here have any strong reason to believe that the ML research community norm of "not taking AGI discussion seriously" stems from a different place than the oil industry's norm of "not taking carbon dioxide emission discussion seriously"?
I'm genuinely split. I can think of one or two other reasons there'd be a consensus position of dismissiveness (preventing bikeshedding, for example), but at this point I'm not sure, and it affects how I talk to ML researchers.
Lie detection technology is going mainstream. ClearSpeed is such an improvement over polygraphs in accuracy and ease of use that various government law enforcement and military organizations are starting to notice. In 2027 (edit: maybe more like 2029) it will be common knowledge that you can no longer lie to the police, and you should prepare for this eventuality if you haven't.
Hey [anonymous]. I see you deactivated your account. Hope you're okay! Happy to chat if you want on Signal at five one oh, nine nine eight, four seven seven one (also a +1 at the front for US country code).
(Follow-up: [anonymous] reached out, is doing fine.)
Now is the time to write to your congressman and (may Allah forgive me for uttering this term) "signal boost" about actually effective AI regulation strategies - retroactive funding for hitting interpretability milestones, good liability rules surrounding accidents, funding for long-term safety research. Use whatever contacts you have, this week. Congress is writing these rules now and we may not have another chance to affect them.
Noticed something recently. An alien could read pretty much everything Wikipedia has on celebrities, both the articles on individual people and the general articles about celebrity as a concept, and never learn that celebrities tend to be extraordinarily attractive. I'm not talking about an accurate or even attempted explanation for the tendency; I'm talking about the existence of the tendency at all. I've tried to find something on Wikipedia that states it, but that information just doesn't exist (except, of course, implicitly through photographs).
It's quite odd, and I'm sure it's not the only example. "Celebrities are attractive" is one obvious piece of some broader set of truisms that seem to be completely missing from the world's most complete database of factual information.
The Olympics are really cool. I appreciate that they exist. There are some timelines out there where they don't have an Olympics and nobody notices anything is wrong.
Let me put in my 2c now that the collapse of FTX is going to be mostly irrelevant to effective altruism, except inasmuch as EA and longtermist foundations no longer have a bunch of incoming money from Sam Bankman-Fried. People are going on and on about the "PR damage" to EA by association because a large donor turned out to be a fraud, but are failing to actually predict what the concrete consequences of such a "PR loss" are going to be. Seems to me like they're making the typical fallacy of overestimating the relevance of general public perception[1] to an insular ingroup's ability to accomplish goals, as well as overestimating the public's attention span in the first place.
As measured by what little Rationalists read from members of the public while glued to Twitter for four hours each day.
I think P(doom) is fairly high (maybe 60%), and working on AI research or independently accelerating AI race dynamics is one of the worst things you can do. I do not endorse improving the capabilities of frontier models and think humanity would benefit if you worked on other things instead.
That said, I hope Anthropic retains a market lead, ceteris paribus. I think there are a lot of ambiguous parts of the standard AI risk thesis, and there's a strong possibility we get reasonablish alignment with a few quick creative techniques at the finish, like faithful CoT. If that happens, I expect it might be because Anthropic researchers decided to pull back and use their leverage to coordinate a pause. I also do not see what Anthropic could do on the research front at this point that would make race dynamics even worse than they already are, besides split up the company. And I do not want to live in a world entirely controlled by Sam Altman, which I think could be worse than death.
Falling birthrates are the climate change of the right:
Most justice systems seem to punish theft on a log scale. I'm not big on capital punishment, but it is actually bizarre that you can misplace a billion dollars of client funds and escape the reaper in a state where that's done fairly regularly. The law seems to be saying: "don't steal, but if you do, think bigger."
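To make "log scale" concrete, here's a toy model (the constants are invented for illustration, not taken from any real sentencing guideline):

```python
import math

def sentence_years(amount_stolen: float, years_per_10x: float = 1.5) -> float:
    """Toy model: each 10x increase in theft adds a fixed number of years."""
    return years_per_10x * math.log10(amount_stolen)

for amount in (1_000, 1_000_000, 1_000_000_000):
    print(f"${amount:>13,} stolen -> {sentence_years(amount):.1f} years")
# $1,000 -> 4.5y, $1,000,000 -> 9.0y, $1,000,000,000 -> 13.5y:
# every additional 1000x of theft costs the same 4.5 extra years.
```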
I don't agree with the take about net worth. The fine should just be whatever makes the state indifferent to the externalities of speeding. If Bill Gates wants to pay enormous taxes to speed aggressively, then that would work too.
LessWrong as a website has gotten much buggier for me lately. Six months ago it worked like clockwork, but recently I'm noticing that refreshes on my profile page take something like 18 seconds to complete, or even 504 (!). I'm trying to edit my old "pessimistic alignment" post now and the interface is just not letting me; the site freezes for a while and then refuses to put the content in the text box for me to edit.
It doesn't feel like I'm getting smarter. It feels like everybody else is getting dumber. I feel as smart as I was when I was 14.
In worlds where status is doled out based on something objective, like athletic performance or money, there may be lots of bad equilibria & doping, and life may be unfair, but at the end of the day competitors will receive the slack to do unconventional things and be incentivized to think rationally about the game and their place in it.
In worlds where status is doled out based on popularity or style, like politics or Twitter, the ideal strategy will always be to mentally bully yourself into becoming an inhuman goblin-sociopath, and to keep hardcoded blind spots. Naively pretending to be the goblin in the hopes of keeping the rest of your epistemics intact is dangerous in these arenas; others will prod your presentation and try to reveal the human underneath. The lionized celebrities will be those who embody the mask to some extent, completely shaving off the edges of their personality and thinking and feeling entirely in whatever brand of riddlespeak goes for truth inside their subculture.
A surprisingly large number of people seem to apply statuslike reasoning to inanimate goods. To many, if someone sells a coin or an NFT for a very high price, this is not merely curious or misguided: it's outright infuriating. They react as if others are making a tremendous social faux pas - and even worse, as if society is validating their missteps.
Sometimes people say "before we colonize Mars, we have to be able to colonize Antarctica first".
What are the actual obstacles to doing that? Is there any future tech somewhere down the tree that could fix its climate, etc.?
The Antarctic Treaty (and subsequent treaties) forbid colonization. They also forbid extraction of useful resources from Antarctica, thereby eliminating one of the main motivations for colonization. They further forbid any profitable capitalist activity on the continent. So you can’t even do activities that would tend toward permanent settlement, like surveying to find mining opportunities, or opening a tourist hotel. Basically, the treaty system is set up so that not only can’t you colonize, but you can’t even get close to colonizing.
Northern Greenland is inhabited, and it’s at a similar latitude.
(Begin semi-joke paragraph) I think the US should pull out of the treaty, and then announce that Antarctica is now part of the US, all countries are welcome to continue their purely scientific activity provided they get a visa, and announce the continent is now open to productive activity. What’s the point of having the world’s most powerful navy if you can’t do a fait accompli once in a while? Trump would love it, since it’s simultaneously unprecedented, arrogant and profitable. Biggest real estate development deal ever! It’s huuuge!
A man may climb the ladder all the way to the top, only to realize he’s on the wrong building.
"But someone would have blown the whistle! Someone would have realized that the whistle might be blown!"
I regret to tell you that most of the time intelligence officers just do what they're told.
Yes, if you have an illegal spying program running for ten years with thousands of employees moving in and out, that will run a low-grade YoY chance of being publicized. Management will know about that low-grade chance and act accordingly. But most of the time you as a civilian just never hear about what it is that intel agencies are doing, at least not for t...
It is hard for me to tell whether my not using GPT-4 as a programmer is because I'm some kind of boomer, or because it's actually not that useful outside of filling Google's gaps.
If it did actually turn out that aliens had visited Earth, I'd be pretty willing to completely scrap the entire Yudkowskian implied-model-of-intelligent-species-development and heavily reevaluate my concerns around AI safety.
You don't hear much about the economic calculation problem anymore, because "we lack a big computer for performing economic calculations" was always an extremely absurd reason to dislike communism. The real problem with central planning is that most of the time the central planner is a dictator who has no incentive to run anything well in the first place, and gets selected by ruthlessness from a pool of existing apparatchiks, and gets paranoid about stability and goes on political purges.
What are some other, modern, "autistic" explanations for social dysfu...
Computer hacking is not a particularly significant medium of communication between prominent AI research labs, nonprofits, or academic researchers. Much more often than relying on leaked trade secrets, ML people will just use insights found in this online repository called arXiv, where many of them openly and intentionally publish their findings. Nor (as far as I am aware) are stolen trade secrets a significant source of foundational insights for researchers making capabilities gains, local to their institution or otherwise.
I don't see this changing on its own,...
Every once in a while I get bad gateway errors on LessWrong. Thought I should mention it for the devs.
It is unnecessary to postulate that CEOs and governments will be "overthrown" by rogue AI. Board members in the future will insist that their company appoint an AI to run the company because they think they'll get better returns that way. Congressmen will use them to manage their campaigns and draft their laws. Heads of state will use them to manage their militaries and police agencies. If someone objects that their AI is really unreliable or doesn't look like it shares their values, someone else on the board will say "But $NFGM is doing the same thing; we...
Currently reading The Rise and Fall of the Third Reich for the first time. I've wanted to read a book about Nazi Germany for a while now, and tried more "modern" and "updated" books, but IMO they are still pretty inferior to this one. The recent books from historians I looked at were concerned more with ideological opposition to Great Man theories than with factual accuracy, and also simply failed to hold my attention. Newer books are also necessarily written by someone who wasn't there, and by someone who does not feel comfortable commenting about events fr...
I have been working on a detailed post for about a month and a half now about how computer security is going to get catastrophically worse as we get the next 3-10 years of AI advancements, and unfortunately reality is moving faster than I can finish it:
https://krebsonsecurity.com/2022/10/glut-of-fake-linkedin-profiles-pits-hr-against-the-bots/
Good rationalists have an absurd advantage over the field in altruism, and only a marginal advantage in highly optimized status arenas like tech startups. The human brain is already designed to be effective when it comes to status competitions, and systematically ineffective when it comes to helping other people.
So it's much more of a tragedy for the competent rationalist to choose to spend most of their time competing in those arenas than to take a shot at a wacky idea for helping others. You might reasonably expect to be better at it than 99% of the people who (respectably!) attempt to do so. Consider not burning that advantage!
In hindsight, it is literally "based theorem". It's a theorem about exactly how much to be based.
The greatest strategy for organizing vast conspiracies is usually failing to realize that what you're doing is illegal.
In the same way that Chinese people forgot how to write characters by hand, I think most programmers will forget how to write code without LLM editors or plugins pretty soon.
They may, but I think the AI code generators would have to be quite good. As long as the LLMs are merely complementing programming languages, I expect them to remain human-readable & writable; only once they are replacing existing programming languages do I expect serious inscrutability. Programming language development can be surprisingly antiquated and old-fashioned: there are many ways to design a language or encode it where it could be infeasible to 'write' it without a specialized program, and yet, in practice, pretty much every language you'll use which is not a domain-specific (usually proprietary) tool will let you write source code in a plain text editor like Notepad or nano.
The use of syntax highlighting goes back to at least the ALGOL report, and yet, something like 50 years later, there are not many languages which can't be read without syntax highlighting. In fact, there are very few which can't be programmed just fine with solely ASCII characters in an 80-col teletype terminal, still. (APL famously failed to ever break out of its niche and all spiritual successors have generally found it wiser to at least provide a 'plain text' encoding; Fortress likewise never becam...
There's a particular AI enabled cybersecurity attack vector that I expect is going to cause a lot of problems in the next year or two. Like, every large organization is gonna get hacked in the same way. But I don't know the solution to the problem, and I fear giving particulars on how it would work at a particular FAANG would just make the issue worse.
Serial murder seems like an extremely laborious task. For every actual serial killer out there, there have to be at least a hundred people who would really like to be serial killers, but lack the gumption or agency and just resign themselves to playing video games.
I sometimes read someone on here who disagrees fiercely with Eliezer, or has some kind of beef with standard LessWrong/doomer ideology, and instinctively imagine that they're different from the median LW user in other ways, like not being caricaturishly nerdy. But it turns out we're all caricaturishly nerdy.
There is a kind of decadence that has seeped into first-world countries ever since they stopped seriously fearing conventional war. I would not bring war back in order to end the decadence, but I do lament that governments lack an obvious existential problem of a similar caliber, one that might coerce their leaders and their citizenry into taking foreign and domestic policy seriously, and keep them from devolving into mindless populism and infighting.
To the extent that "The Cathedral" was ever a real thing, I think whatever social mechanisms that supported it have begun collapsing or at least retreating to a fallback line in very recent years. Just a feeling.
Conspiracy theory: sometime in the last twenty years the CIA developed actually effective polygraphs and the government has been using them to weed out spies at intelligence agencies. This is why there haven't been any big American espionage cases in the past ten years or so.
Either post your NASDAQ 100 futures contracts or stop fronting near-term slow takeoff probabilities above ~10%.
If I were still a computer security engineer and had never found LessWrong, I'd probably be low-key hyped about all of the new classes of prompt injection and social engineering bugs that ChatGPT plugins are going to spawn.
>be big unimpeachable tech ceo
>need to make some layoffs, but don't want to have to kill morale, or for your employees to think you're disloyal
>publish a manifesto on the internet proclaiming your corporation's allegiance to right-libertarianism or something
>half of your payroll resigns voluntarily without any purging
>give half their pay to the other half of your workforce and make an extra 200MM that year
Forcing your predictions, even if they rely on intuition, to land on nice round numbers so others don't infer things about the significant digits is sacrificing accuracy for the appearance of intellectual modesty. If you're around people who shouldn't care about the latter, you should feel free to throw out numbers like 86.2% and just clarify that your precision is nowhere near 0.1%, if 86.2% happens to be the best available number for you to pick.
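If the 86.2% really is your calibrated probability, the cost of rounding is quantifiable: under the Brier score, reporting q when your true probability is p adds exactly (p - q)^2 of expected loss. A quick sketch, reusing the example numbers from above:

```python
def expected_brier(p_true: float, q_report: float) -> float:
    """Expected (outcome - q)^2 when the outcome is Bernoulli(p_true)."""
    return p_true * (1 - q_report) ** 2 + (1 - p_true) * q_report ** 2

p = 0.862
for q in (0.862, 0.85, 0.90):
    excess = expected_brier(p, q) - expected_brier(p, p)
    print(f"report {q:.3f}: excess Brier loss = {excess:.6f}")
# Rounding to 0.90 costs (0.862 - 0.90)^2 = 0.001444: small, but a strictly
# positive accuracy sacrifice made only for the appearance of modesty.
```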
Every five years since I was 11 I've watched The Dark Knight thinking "maybe this time I'll find out it wasn't actually as good as I remember it being". So far it's only gotten better each time.
Made an opinionated "update" for the anti-kibitzer mode script; it works for current LessWrong with its agree/disagree votes and all that jazz, fixes some longstanding bugs that break the formatting of the site and allow you to see votes in certain places, and doesn't indent usernames anymore. Install Tampermonkey and browse to this link if you'd like to use it.
Semi-related, I am instituting a Reign Of Terror policy for my poasts/shortform, which I will update my moderation policy with. The general goal of these policies is to reduce the amount of ti...
Based on Victoria Nuland's recent Senate testimony, I'm registering a 66% prediction that those U.S.-administered biological weapons facilities in Ukraine actually do indeed exist, and are not Russian propaganda.
Of course I don't think this is why they invaded, but the media is painting this as a crazy conspiracy theory, when they have very little reason to know either way.
I wonder if the original purpose of Catholic confession was to extract blackmail material/monitor converts, similar to what modern cults sometimes do.
"Men lift for themselves/to dominate other men" is the absurd final boss of ritualistic insights-chasing internet discourse. Don't twist your mind into an Escher painting trying to read hansonian inner meanings into everything.
In other news, women wear makeup because it makes them more attractive.
Getting "building something no one wants" vibes from the AI girlfriend startups. I don't think men are going to drop out of the dating market until we have some kind of robotics/social revolution, possibly post-AGI. Lonely dudes are just not that interested in talking to chatbots that (so they believe) lack any kind of internal emotion or psychological life, cannot be shown to their friends/parents, and cannot have sex or bear children.
There's a portion of Project Lawful where Keltham contemplates a strategy of releasing Rovagug as a way to "distract" the Gods while Keltham does something sinister.
Wouldn't Lawful beings with good decision theory precommit to not being distracted and just immediately squish Keltham, thereby being immune to those sorts of strategies?
10xing my income did absolutely nothing for my dating life. It had so little impact that I am now suspicious of all of the people who suggest this more than marginally improves sexual success for men.
What impact did it have on your life in general?
For example, I can imagine someone getting a 10x income in a completely invisible way, such as making some smartphone games anonymously, selling them on the app store, and putting all the extra money in a bank account, while living exactly the same way as before: keeping their day job, keeping the same spending habits, etc. That kind of income increase would obviously have no impact, as it is almost epiphenomenal.
Also, if you 10x your income by finding a job that requires you to work 16 hours a day, 7 days a week, the impact on dating will be negative, as you will now have no time to meet people. Similarly, if the better paying work makes you so tired that you just don't have any energy left for social activities in your free time, etc.
But if we imagine a situation like "you have the same kind of 9-5 job that takes the same amount of your energy, except somehow your salary is now 10x what it used to be (and maybe you have a more impressive job title)"...
I guess you could buy some signals of wealth, such as more expensive clothes, a car, or watches. (This won't happen automatically; you have to actually do it.) You could get some extra free tim...
At least according to CNN's exit polls, a white person in their twenties was only 6% less likely to vote for Trump in 2020 than a white person above the age of sixty!
This was actually very surprising to me; I think a lot of people have a background sense that younger white voters are much less socially and politically conservative. That might still be true, but the ones who choose to vote vote Republican at basically the same rate in national elections.
Seems like there'd be a lot of adversarial pressure on how that signal gets used. Have you heard of the Nobel Peace Prize?
We will witness a resurgent alt-right movement soon, this time facing a dulled institutional backlash compared to what kept it from growing during the mid-2010s. I could see Nick Fuentes becoming a Congressman or at least a major participant in Republican party politics within the next 10 years if AI/Gene Editing doesn't change much.
Do happy people ever do couple's counseling for the same reason that mentally healthy people sometimes do talk therapy?
Crazy how you can open a brokerage account at a large bank and they can just... Close it and refuse to give you your money back. Like what am I going to do, go to the police?
That does sound crazy. Literally - without knowing some details and something about the person making the claim, I think it's more likely the person is leaving out important bits or fully hallucinating some of the communications, rather than just being randomly targeted.
That's just based on my priors, and it wouldn't take much evidence to make me give more weight to possibilities of a scammer at the bank stealing account contents and then covering their tracks, or bank processes gone amok and invoking terrorist/money-laundering policies incorrectly.
Going to police/regulators does sound appropriate in the latter two cases. I'd start with a private lawyer first, if the sums involved are much larger than the likely fees.
Just had a conversation with a guy where he claimed that the main thing that separates him from EAs was that his failure mode is us not conquering the universe. He said that, while doomers were fundamentally OK with us staying chained to Earth and never expanding to make a nice intergalactic civilization, he, an AI developer, was concerned about the astronomical loss (not his term) of not seeding the galaxy with our descendants. This P(utopia) for him trumped all other relevant expected value considerations.
What went wrong?
I think it might be healthier to call rationality "systematized and IQ-controlled winning". I'm generally very unimpressed by the rationality skills of the 155 IQ computer programmer with eight failed startups under his belt, who quits and goes to work at Google after that, when compared to the similarly-status-motivated 110 IQ person who figures out how to get a high-paying job at a car dealership. The former probably writes better LessWrong posts, but the latter seems to be using their faculties in a much more reasonable way.
That is the VC propaganda line, yeah. I don't think it's actually true; for the median LW-using software engineer, working for an established software company seems to net more expected value than starting a company. Certainly the person who has spent the last five years of their twenties attempting and failing to do that is likely making repeated and horrible mistakes.
The real reason it's hard to write a utopia is that we've evolved to find our civ's inadequacy exciting. Even IRL villainy on Earth serves a motivating purpose for us.
A hobbyhorse of mine is that "utopia is hard" is a non-issue. Most sitcoms, coming-of-age stories and other "non-epic" stories basically take place in Utopia (i.e. nobody is at risk of dying from hunger or whatever, the stakes are minor social games, which is basically what I expect the stakes in real-life-utopia to be most of the time).
It seems like the "utopian fiction is hard" problem only comes up for particular flavors of nerds who are into some particular kind of "epic" power fantasy framework with huge stakes. And that just isn't actually what most stories are about.
Saw someone today demonstrating what I like to call the "Kirkegaard fallacy", in response to the Debrief article making the rounds.
People who have one obscure or weird belief tend to be unusually open minded and thus have other weird beliefs. Sometimes this is because they enter a feedback loop where they discover some established opinion is likely wrong, and then discount perceived evidence for all other established opinions.
This is a predictable state of affairs regardless of the nonconsensus belief, so the fact that a person currently talking to you about e.g. UFOs entertains other off-brand ideas like parapsychology or afterlives is not good evidence that the nonconsensus opinion in question is false.
Putting body cameras on police officers often increases tyranny. In particular, applying 24/7 monitoring to foot soldiers forces those foot soldiers to strictly follow protocol and arrest people for infractions they otherwise wouldn't. In the 80s, for example, there were many officers who chose not to follow mandatory arrest procedures for drugs like marijuana, because they didn't want to and it wasn't worth their time. Not so in today's era, mostly, where they would have essentially no choice except to follow orders or resign.
How does a myth theory of college education, where college is stupid for a large proportion of people but they do it anyway because they're risk-intolerant and have little understanding of the labor markets they want to enter, immediately hold up against the signaling hypothesis?
Anarchocapitalism is pretty silly, but I think there are kernels of it that provide interesting solutions to social problems.
For example: imagine lenders and borrowers could pay for and agree on enforcement mechanisms for nonpayment meted out by the state, instead of those being dictated by congress. E.g. if you don't pay this back on time, you go to prison for ${n} months. This way people with bad credit scores or poor impulse control might still be able to get credit.
I don't think I will ever find the time to write my novel. Writing novels is dumb anyways. But I feel like the novel and world are bursting out of me. What do
Political dialogue is a game with a meta. The same groups of people with the same values in a different environment will produce a different socially determined ruleset for rhetorical debate. The arguments we see as common are a product of the current debate meta, and the debate meta changes all the time.
I feel like at least throughout the 2000s and early 2010s we all had a tacit, correct assumption that video games would continually get better - not just in terms of visuals but design and narrative.
This seems no longer the case. It's true that we still get "great" games from time to time, but only games "great" by the standards of last year. It's hard to think of an actually boundary-pushing title that was released since 2018.
Apparently I was wrong[1] - OpenAI does care about ChatGPT jailbreaks.
Here is my first partial jailbreak - it's a combination of stuff I've seen people do with GPT-4, combining base64, using ChatGPT to simulate a VM, and weird invalid URLs.
Sorry for having to post multiple screenshots. The base64 in the earlier message actually just produces a normal kitchen recipe, but it does give up the ingredients there. I have no idea if they're correct. When I tried later to get the unredacted version:
Giving people money for doing good things they can't publicly take credit for is awesome, but what would honestly motivate me to do something like that just as much would be if I could have an official nice-looking but undesignated Truman Award plaque to keep in my apartment. That way people in the know who visit me or who googled it would go "So, what'd you actually get that for?" and I'd just mysteriously smile and casually move the conversation along.
Feel free to brag shamelessly to me about any legitimate work for alignment you've done outside of my posts (which are under an anti-kibitzer policy).
I need a LW feature equivalent to stop-loss where if I post something risky and it goes below -3 or -5 it self-destructs.
Within the next fifteen years AI is going to briefly seem like it's solving computer security (50% chance) and then it's going to enhance attacker capabilities to the point that it causes severe economic damage (50% chance).
IMO: Microservices and "siloing" in general are a strategy for solving principal-agent problems inside large technology companies. They are not a tool for solving technical problems, and are otherwise generally strictly inferior to monoliths, especially when working on a startup where the requirements for your application are changing all of the time.
Two caveats to efficient markets in finance that I've just considered, speaking as a non-economist, but that I don't see mentioned much in discussions of bubbles like the one we just experienced:
First: Irrational people are constantly entering the market, often in ways that can't necessarily be predicted. The idea that people who make bad trades will eventually lose all of their money and be swamped by the better investors is only valid inasmuch as the actors currently participating in the market stay the same. This means that it's perfectly possible for either ...
I actually don't really know how to think about the question of whether or not the 2016 election was stolen. Our sensemaking institutions would say it wasn't stolen if it was, and would say it wasn't stolen if it wasn't.
But the prediction markets provide some evidence! Where are all of the election truthers betting against Trump?
If we can imagine medianworlds in which the average person on Earth would be considered extremely stupid, we can also imagine medianworlds in which the average person on Earth is extremely poorly-put-together, in the same sense that someone on the internet might be aghast at the self destructive behavior of Christian Weston Chandler or BossmanJack. In such a world there'd be an everyman Joe Bauers who livestreams their life to wide ridicule for their inability to follow a diet or go to sleep on time.
I think most observers are underestimating how popular Nick Fuentes will be in about a year among conservatives. Would love to operationalize this belief and create some manifold markets about it. Some ideas:
A common gambit: during a prisoner's dilemma, signal (or simply let others find out) that you're about to defect. Watch as your counterparty adopts newly hostile rhetoric, takes defensive measures, or begins to defect themselves. Then, after you ultimately do defect, say that it was a preemptive strike against forces that might take advantage of your good nature, pointing to the recent evidence.
Simple fictional example: In Star Wars Episode III, Palpatine's plot to overthrow the Senate is discovered by the Jedi. They attempt to kill him, to prevent him from doin...
Claude seems noticeably and usefully smarter than GPT-4; it's succeeding at helping me with writing and programming tasks that GPT-4 couldn't. However, it's hard to tell how much of the improvement is the model itself being more intelligent, vs. Claude being much less subjected to intense copywritization RLHF.
SPY calls expiring in December 2026 at strike prices of +30/40/50% are extremely underpriced. I would allocate a small portion of my portfolio to them as a form of slow takeoff insurance, with the expectation that they expire worthless.
People have a bias toward paranoid interpretations of events, in order to encourage the people around them not to engage in suspicious activity. This bias extends beyond their personal relationships and affects how people react to e.g. government action, not necessarily in negative ways.
Dictators who start by claiming impending QoL and economic growth and then switch focus to their nation's "culture" are like the political equivalent of hedge funds that start out doing quant stuff and then eventually switch to news trading on Elon Musk crypto tweets when that turns out to get really hard.
I'd analogize it more to traders who make money during a bull market, except in this case the bull market is 'industrialization'. Yeah, turns out even a dictator like Stalin or Xi can look like 'a great leader' who has 'mastered the currents of history' and refuted liberal democracy - well, until they run out of industrialization & catchup growth, anyway.
Postmodernism and metamodernism are tools for making sure the audience knows how self-aware the writer of a movie is. Audiences require this acknowledgement in order to enjoy a movie, and will assume the writer is stupid if they do not get it.
"No need to invoke slippery slope fallacies, here. Let's just consider the Czechoslovakian question in of itself" - Adolf Hitler
The Greatest Generation IMO deserves its name, and we should be grateful to live on its political, military, and scientific achievements.
The most common refrain I hear against the possibility of widespread voter fraud is that demographers and pollsters would catch such malfeasance, but in practice when pollsters see a discrepancy between voting results and polls they seem to just assume the polls were biased. Is there a better reason besides "the FBI seems pretty competent"?
I feel like using the term "memetic warfare" semi-unironically is one of the best signs that the internet has poisoned your mind beyond recognition.
I remember reading about a nonprofit/company that was doing summer internships for alignment researchers. I thought it was Redwood Research, but apparently they are not hiring. Does anybody know which one I'm thinking of?
> countries develop nukes
> suddenly for the first time ever political leadership faces guaranteed death in the outbreak of war
> war between developed countries almost completely ceases
🤔 🤔 🤔
How would history be different if the 9/11 attackers had solely flown planes into military targets?
For this April Fools' we should do the points thing again, but not award any money; just have a giant leaderboard/gamification system and see what the effects are.
This book is required reading for anyone claiming that explaining the AI X-risk thesis to normies is really easy, because they "did it to Mom/Friend/Uber driver":
https://www.amazon.com/Mom-Test-customers-business-everyone-ebook/dp/B01H4G2J1U
"The test of sanity is not the normality of the method but the reasonableness of the discovery. If Newton had been informed by [the ghost of] Pythagoras that the moon was made of green cheese, then Newton would have been locked up. Gravitation, being a reasoned hypothesis which fitted remarkably well into the Copernican version of the observed physical facts of the universe, established Newton's reputation for extraordinary intelligence, and would have done so no matter how fantastically he arrived at it. Yet his theory of gravitation is not so impressive ...
Making science fiction novels or movies to tell everyone about the bad consequences of a potential technology seems completely counterproductive, in retrospect:
> be me
> start watching first episode of twin peaks, at recommendation of friends
> become subjected to the worst f(acting, dialogue) possible within first 10 mins
The first three episodes of Narcos: Mexico, Season 3, are some of the best television I have ever seen. The rest of the "Narcos" series is middling to bad and I barely tolerate it. So far I would encourage you to skip straight to this season.
The "cognition is computation" hypothesis remains mysterious. How granular do the time steps have to be in my sim before someone starts feeling something? Do I have to run the sim forward at planck intervals in order to produce qualitative experience? Milliseconds? Minutes? Can you run the simulation backwards and get spooky inverse emotions or avoid qualia entirely that way?
A small colony of humans is a genuinely tiny waste of paperclips. I am slightly more worried about the possibility that the acausal trade equilibrium cashes out to the AGI treating us badly because some aliens in a foreign Everett branch have some bizarre religious/moral opinions about the lives we ought to lead, than I am about being turned into squiggles.
Dogs and cats are not "aligned" to the degree that would be necessary to prevent a superintelligent dog from doing bad things. If tomorrow a new chew toy were released that made dogs capable of organizing to overthrow the government and start passing mandatory petting quotas, that would be a problem.
Life sucks. I have no further comment and am probably polluting the LW feed. I just want to vent on the internet.
Spoilered, semi-nsfw extremely dumb question
If you've already had sex with a woman, what's the correct way to go about arranging sex again? How indirect should I actually be about it?
Lost a bunch of huge edits to one of my draft posts because my battery ran out. Just realizing that happened and now I can't remember all the edits I made, just that they were good. :(
I wish there were a way I could spend money/resources to promote question posts, in a way that counterbalanced the fact that the algorithm has already shown them to roughly the optimal number of people.
If you simply want people to invest more into answering a question post, putting out a bounty for the best answer would be a way to go about it.
I just launched a startup, Leonard Cyber. Basically a Pwn2Job platform.
If any hackers on LessWrong are out of work, here are some invite codes:
kBCYAzL7J5vGTGY
c8Vakd4AE3al9NI
cnMBO0ZfGhsNZd7
0zsYsfDk7r5508l
F0gv4NRID7FBeJH
In the last ten days we've had the Trump assassination attempt, the CrowdStrike global computer outage, and the Joe Biden dropout.
I need a metacritic that adjusts for signaling on behalf of movie reviewers. So like if a movie is about race, it subtracts ten points, if it's a comedy it adds 5, etc.
I am being absolutely literal about this: The Greater Forces Controlling Reality are constantly conspiring to teach me things. They try so hard. I almost feel bad for them.
I've only watched season one, but so far Game of Thrones has been a lot less cynical than I expected.
LessWrong and "TPOT" is not the general public. They're not even smart versions of the general public. An end to leftist preference falsification and sacred cows, if it does come, will not bring whatever brand of IQ realism you are probably hoping for. It will not mainstream Charles Murray or Garrett Jones. Far more simple, memetic, and popular among both white and nonwhite right wingers in the absence of social pressures against it is groyper-style antisemitism. That is just one example; it could be something stupider and more invigorating.
I wish it weren...
I'm not voting for either presidential candidate this year. I know my vote doesn't matter, but I don't care. What we have is indistinguishable from soft authoritarianism, and I'd prefer not to lend any legitimacy to a "democracy" that gives me only two choices for President, one of whom is literally senile and cannot articulate his own policy positions at a podium.
As a Russian, I feel strong déjà vu.