If I make a comment and the author then deletes the post, is my comment lost as well? I'm pretty sure I don't actually need a record of things I've said, and that my comments aren't valuable enough to justify extra effort, but it kind of bugs me.
Has anyone written a scraper or script to save a copy of everything you write on LW, so it's available even if it gets deleted later? Ideally it would also archive the post and comment thread being responded to.
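Something like the sketch below is what I have in mind, using LW's GraphQL endpoint. To be clear: the endpoint exists, but the query shape, view name, and field names here are my guesses and would need checking against the live schema.

```python
# Sketch: periodically back up my own LW comments via the GraphQL API.
# The endpoint is real; the query below is an ASSUMED shape and would
# need adjusting to the actual schema.
import json
import requests

ENDPOINT = "https://www.lesswrong.com/graphql"

# Hypothetical query -- view name, userId placeholder, and fields are guesses.
QUERY = """
{
  comments(input: {terms: {view: "userComments", userId: "MY_USER_ID", limit: 500}}) {
    results {
      _id
      postId
      postedAt
      contents { markdown }
    }
  }
}
"""

def backup_comments(path="lw_comments.json"):
    resp = requests.post(ENDPOINT, json={"query": QUERY})
    resp.raise_for_status()
    comments = resp.json()["data"]["comments"]["results"]
    with open(path, "w") as f:
        json.dump(comments, f, indent=2)
    return len(comments)

if __name__ == "__main__":
    print(f"Saved {backup_comments()} comments")
```

Run from cron once a day and the deletion problem mostly goes away; archiving the parent post/thread would just be a second query keyed on each `postId`.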
I love markdown when it works, and it drives me nuts when it doesn't. All the docs and cheat sheets in the world don't help.
Would it be possible to add a "view source" option for all posts and comments (not just mine), so I can learn from others' formatting rather than re-discovering my own mistakes over and over?
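For my own reference, the failure modes that bite me most often in standard markdown renderers (behavior varies a bit by flavor):

```markdown
A single newline here
does not create a line break (you need two trailing spaces, or a blank line).

    Four leading spaces quietly turn a paragraph into a code block.

1. Numbered lists renumber themselves,
1. so these two items render as 1 and 2.

\*escaped asterisks\* stay literal; *unescaped ones* italicize.
```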
I wish I could agree-and-downvote (or agree-and-not-vote) on posts. There are a fair few posts that are well-reasoned and probably correct, but just not important or interesting.
I wish people would be more careful about reinforcing the fundamental attribution error. We all (including me) have a tendency to talk about types of people rather than behaviors.
I wish we talked about mistake theory rather than mistake theorists. I wish we talked about ideas or discussions at a given simulacra level, rather than people operating at that level. Most recently, I wish the discussion had been about "empathetic oofs" rather than "empathetic oofers".
I find it off-putting, and it makes it harder for me to accept the underlying thesis, when such behaviors are attributed to immutable personality types. More importantly, I think it constrains our analysis very severely not to recognize that these are valid (or at least common) strategies, and that we should strive to use both when appropriate.
I'm a bit uncomfortable with [edit: wrong word. all discussion is good, but I think there's a big modeling error being made in] a lot of the discussion about ethics/morals around here, in that ethics is often treated as a separate domain, incommensurable with other motives/values/utility sources.
It may be, in humans and/or in AI, that there are fairly distinct modules for different domains, which somehow bid/negotiate for which of them influences a given decision by how much. This would make the "morals as distinct evaluations" approach reasonable, though the actual negotiation and power weights among modules seem more important to study.
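A toy version of that picture, purely to fix the idea (modules, weights, and actions all invented): each module scores an action, a negotiated weight says how much its vote counts, and the decision goes to the highest weighted total.

```python
# Toy model of "moral module vs. other-drives modules" bidding on a decision.
# Everything here is illustrative, not a claim about actual cognition.

def weighted_choice(actions, modules):
    """Each module is (scoring function, negotiation weight); pick the
    action with the highest total weighted score."""
    def total(action):
        return sum(weight * score(action) for score, weight in modules)
    return max(actions, key=total)

# Hypothetical modules with current negotiated weights.
modules = [
    (lambda a: a["moral_value"],    0.40),  # ethics module
    (lambda a: a["status_value"],   0.35),  # status module
    (lambda a: a["pleasure_value"], 0.25),  # pleasure module
]

actions = [
    {"name": "donate", "moral_value": 0.9, "status_value": 0.3, "pleasure_value": 0.1},
    {"name": "party",  "moral_value": 0.1, "status_value": 0.5, "pleasure_value": 0.9},
]

print(weighted_choice(actions, modules)["name"])  # "donate" with these weights
```

Note that with fixed weights this collapses into a single utility function (a weighted sum); it's only context-dependent renegotiation of the weights that produces the non-VNM behavior the next paragraph worries about.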
But anything that approaches VNM-rationality acts as if it has a unified, consistent utility function, which means an integrated set of values and motivations, not separate models of "what's the right action in this context for me".
So, either AI is irrational (like humans), or exploration of morality as a separate topic from status, pleasure, family, or other non-altruistic drives is not very applicable.
It's fascinating to me to see people changing their recommended meta-ethics due to a very public bad actor and related financial meltdown. How does the FTX meltdown and fraud change your fundamental priorities OR your mechanisms for evaluating actions within those priorities?
Sure, adding some humility to the evaluation function is wise. Recognize that things probably aren't as linear as you think (and certainly not as precise), when you get toward very large or small numbers. Update to focus on nearer-term, safer improvements. And reducing your trust (for those of you who had it) in the assumption that rich people who say comforting words are actually aligned in any useful way is ... so obvious that I'm sad to have to say it.
But if you have believed for years that you can aggregate and compare future population utility against current experiences, and predict what actions will have large impacts on that future, it's really odd if you give that up based on things that happened recently.
[ note: I am not utilitarian, but I am consequentialist. I never assumed that SBF was a pure (or even net) good, nor that his methods were likely to be sustainable. But even if I were surprised by this, I don't see how it's new evidence about something so fundamental. ]
EA folks: is there any donation I can make to increase the expected number of COVID-19 vaccinations given by end of February, either worldwide or for a specific region?
Reminder to self: when posting on utilitarianism-related topics, include the disclaimer that I am a consequentialist but not a utilitarian. I don't believe there is an objective, or even outside-the-individual, perspective on valuation or population aggregation.
Value is relative, and the evaluation of a universe-state can and will be different for different agents. There is no non-indexical utility, and each agent models the value of other agents' preferences idiosyncratically.
Strong anti-realism here. And yet it's fun to play with math and devise systems that mostly match my personal intuitions, so I can't stay away from those topics.
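The formal content of that disclaimer is small but worth stating: each agent gets its own valuation function, and any aggregate bakes in an arbitrary choice of weights.

```latex
v_i : S \to \mathbb{R}
\qquad
W(s) = \sum_i w_i \, v_i(s)
```

Nothing in the world privileges one set of $w_i$ over another; that choice is itself indexical, which is all "no non-indexical utility" means here.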
There's a large class of social/cultural behaviors that I think are fine and good when they're somewhat rare, but annoying and then harmful when they become very common.
This includes public demonstrations, minor mostly-victimless crimes, many kinds of drug use, tipping expectations, and I've just realized that April Fools is now in this category.
It's amazing when a few surprising, well-crafted jokes get published. It's annoying when they're common enough that it's hard to find the good ones, and even harder to find the non-joke communications.
</rant>
Making the rounds.
User: When should we expect AI to take over?
ChatGPT: 10
User: 10? 10 what?
ChatGPT: 9
ChatGPT: 8
...
There's a lot of similarity between the EMH (efficient markets hypothesis) and Aumann's agreement theorem: both imply that visible, persistent disagreement shouldn't survive among rational agents once the relevant information is common knowledge.
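To spell out the kinship: Aumann says agents with a common prior whose posteriors are common knowledge cannot agree to disagree; EMH says prices already incorporate common-knowledge information, so a trader's visible "disagreement" with the price shouldn't persist. In symbols:

```latex
q_i = P(E \mid \mathcal{I}_i), \quad i = 1, 2:
\qquad \text{if } q_1 \text{ and } q_2 \text{ are common knowledge, then } q_1 = q_2.
```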
Types and Degrees of Maze-like situations
Zvi has been writing about topics inspired by the book Moral Mazes, focused on a subset of large corporations where politics, rather than productivity, is seen as the primary competitive dimension and life focus for participants. It's not clear how universal this is, nor what size and competitive thresholds might trigger it. This is just a list of other Maze-like situations that may share some attributes with those, may have some shared causes, and, I hope, some shared solutions.
My daily karma tracker showed that 3 comments got downvoted. Honestly, probably justified - they were pretty low-value. No worries, but it got me thinking:
Can I find out how I'm voting (how much positive and negative karma I've given) over time? Can I find it out for others? I'd love to be able to distinguish a downvote from someone who rarely downvotes from one cast by a serial downvoter.
I've been downvoting less recently, having realized how little signal there is in it, and how discouraging even small amounts can be. Silence or a comment is the better response to anything above pure drivel.
Should I assign a significant chance that the US will actually default on some debts? Betting markets are at 6%, but they're pretty bad at the tails, for various reasons. CDS spreads (people buying insurance for/against a default) are 172bp, implying a 1.72% chance with a fair bit of real money wagered. It's also likely somewhat skewed, for regulatory and technical reasons, but less than the betting markets.
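The back-of-envelope conversion behind that 1.72%: the annual CDS premium $s$ approximately equals default probability $p$ times loss-given-default $(1-R)$, so reading the spread directly as a probability implicitly assumes zero recovery. A nonzero recovery assumption pushes the implied probability up:

```latex
s \approx p\,(1-R) \;\Rightarrow\; p \approx \frac{s}{1-R};
\qquad \frac{0.0172}{1-0} = 1.72\%,
\qquad \frac{0.0172}{1-0.4} \approx 2.9\%.
```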
This wouldn't be an existential threat, but it's definitely a stability and lifestyle threat for a lot of current and future humans, and likely exacerbates other threats.
Not sure I can do much about it, but it's on my mind.
Here's a prediction I haven't seen anywhere else: due to border closures from COVID-19 and Brexit, the value of dual-citizenship is going up (or at least being made visible). This will lead to an uptick in mixed-nationality marriages.
"One of the big lessons of market design is that participants have big strategy sets, so that many kinds of rules can be bent without being broken. That is, there are lots of unanticipated behaviors that may be undesirable, but it's hard to write rules that cover all of them. (It's not only contracts that are incomplete...) " -- Al Roth
I think this summarizes my concerns with some of the recent discussions of rules and norm enforcement. People are complicated, and the decision space is much larger than usually envisioned when talking about...
I'm pretty sure https://en.wikipedia.org/wiki/Seeing_Like_a_State has bigger lessons than just government or large corporations.
A general problem with optimization and control (in AI and commerce and everything else) is that it limits breadth of activity in (at least) two ways. The obvious way is that it avoids things which oppose the stated goals. That's arguably a good and intentional limit.
But it ALSO avoids things that are illegible or unexpected and hard to quantify. I suspect that's a large majority of experiential value for a lar...
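A toy illustration of that second limit (all names and numbers invented): an optimizer scoring only the legible feature systematically discards the option where most of the value actually lives.

```python
# Toy: optimizing on measurable features only silently discards illegible value.
options = [
    {"name": "planned park",  "measured": 8.0, "illegible": 1.0},
    {"name": "messy commons", "measured": 3.0, "illegible": 9.0},
]

best_by_metric = max(options, key=lambda o: o["measured"])
best_overall   = max(options, key=lambda o: o["measured"] + o["illegible"])

print(best_by_metric["name"])  # "planned park"  -- what the optimizer picks
print(best_overall["name"])    # "messy commons" -- where most value actually was
```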
It's worth remembering, in discussions about decision theory, deontology vs consequentialism, and related topics, that ALL of our actual examples of agents are MISTAKEN or DECEPTIVE about the causes of their actions.
There are no consistent VNM-rational people.
I give some probability space to being a Boltzmann-like simulation. It's possible that I exist only for an instant, experience one quantum of input/output, and then am destroyed (presumably after the extra-universal simulators have measured something about the simulation).
This is the most minimal form of Solipsism that I have been configured to conceive. It's also a fun variation of MWI (though not actually connected logically) if it's the case that the simulators are running multiple parallel copies of any given instant, with slightly different configurations and inputs.
Georgism isn't taxation; it's public property ownership.
It finally clicked in my mind why I've reacted negatively to the discussion about Georgist land taxes. It's the same reaction I have to ANY 100% tax rate on anything. That's not what "ownership" means.
Honestly, I'd probably be OK with (or at least able to discuss more reasonably) the idea that land is not privately owned, ever. Only rented/leased from the government, revocable for failure to pay the current rate (equivalent in all ways to the Georgist tax amount, an...
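The equivalence falls out of standard capitalization arithmetic: a land price is the discounted stream of rents, so a tax taking fraction $t$ of rent $R$ (at discount rate $r$) scales the price by $(1-t)$, and at $t=1$ the market price of the deed is zero, i.e., "ownership" conveys nothing.

```latex
P = \frac{(1-t)\,R}{r}
\qquad t = 1 \;\Rightarrow\; P = 0.
```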
I sometimes think about Neanderthal tribes, and what they thought about alignment with Homo Sapiens (in truth, they didn't: there wasn't that much interaction, and the takeover happened over a longer timeframe than any individual of either species could perceive. But I still think about it).
I wonder if we could learn anything from doing a post-mortem (literally) from their viewpoint, to identify anything they could have done differently to make coexistence or long-term dominance over the newer species more likely.
Al Roth interview discussing the historical academic path from simplistic game theory to market design, which covers interesting mixes of games with multiple dimensions of payoff. https://pubs.aeaweb.org/doi/pdfplus/10.1257/jep.33.3.118
I wonder if the general consensus that current LLMs aren't conscious enough to be moral patients should change my assumption that most humans are.
We should probably generalize cryptocurrency mechanisms to "proof of waste". Fundamentally, the concept of a proof-of-work blockchain is that real-world resources are used for artificially complex calculations just to show that one doesn't need those resources for something else.
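The waste is concrete and easy to exhibit. A minimal proof-of-work sketch: grind nonces until the hash clears an arbitrary difficulty bar. The work proves only that resources were burned.

```python
# Minimal proof-of-work sketch: brute-force nonces until the hash has
# enough leading zeros. Difficulty 4 is tiny (~65k hashes on average);
# real chains set it so the whole network needs ~10 minutes per block.
import hashlib

def mine(data: str, difficulty: int = 4) -> int:
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("block contents")
print(nonce, hashlib.sha256(f"block contents{nonce}".encode()).hexdigest())
```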
So, has the NYT had any reaction, response, or indication that they're even considering the issue of publicizing private details of a pseudonymous author? Do we know when the article was planned for publication?
Unrelatedly, on another topic altogether, are there any new-and-upcoming blogs about topics of rationality, psychiatry, statistics, and general smart-ness, written by someone who has no prior reputation or ties to anyone, which I should be watching?
Incentives vs agency - is this an attribution fallacy (and if so, in what direction)?
Most of the time, when I see people discussing incentives about LW participation (karma, voting, comment quality and tone), we're discussing average or other-person incentives, not our own. When we talk about our own reasons for participation, it's usually more nuanced and tied to truth-seeking and cruxing, rather than point-scoring.
I don't think you can create alignment or creative cooperation with incentives. You may be able to encourage it, and you ca...
Comment throttling seems to catch people off guard fairly often. I suspect this is because the criteria are imperfect, but also because it's a VERY sharp change from "unlimited" to "quite small".
Perhaps there should be a default throttle of 5 or 8 comments per day. And this throttle tightens with net downvotes. Maybe increase it with recent net upvotes, or maybe only by an explicit exception request with an explanation of why one wants to make more than that in a day.
edit: oh, better - do away with the daily granularity. Everyone h...
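The continuous version of this is just a token bucket whose refill rate scales with recent karma. A sketch, with every number invented for illustration:

```python
# Sketch of a karma-scaled token-bucket comment throttle (numbers invented).
import time

class CommentThrottle:
    def __init__(self, capacity=8, base_rate_per_day=5):
        self.capacity = capacity                   # max stored comment credits
        self.tokens = float(capacity)
        self.base = base_rate_per_day / 86400.0    # credits per second
        self.rate = self.base
        self.last = time.time()

    def adjust_for_karma(self, recent_net_karma: int):
        # Downvotes tighten the refill rate; upvotes loosen it (bounded).
        factor = max(0.2, min(2.0, 1.0 + recent_net_karma / 50.0))
        self.rate = self.base * factor

    def try_comment(self) -> bool:
        now = time.time()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

No daily cliff: a quiet account slowly stores up to `capacity` comments, and a downvoted one degrades gradually instead of slamming into a wall.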
Are human megastructures (capitalism, large corporations, governments, religious organizations) forms of AI?
They're alien and difficult to understand. Hard to tell if they have their own motives, or if they're reflecting some extracted set of beliefs from humans. They make leaps of action that individual humans can't. They are at best semi-corrigible. Sounds very similar to LLMs to me.
edit: I know this isn't a new thought, but it's becoming more attractive to me as I think about more similarities. Groups do tend to hallucinate, both...
I increasingly think EA is just Objectivism for progressives.
From https://gideons.substack.com/p/altruism-wrap, which I found through Zvi's writeup. I hadn't heard it put quite this way before, but the "shut up and multiply" mindset (at least with made-up coefficients) really does lead to prioritizing far-mode ideals and shaky projections over actual real living people.
It's weird that EA lost credibility (with many, at least partly including me) when it drifted away from mosquito nets, the very thing it used to be mocked for.
Mastodon - is there any primer or description of the theory of federation behind it? What decisions about content or discovery are made at the server level? How do I simultaneously have access to everything federated from every server while getting the advantages of having picked a server with my preferred norms/enforcement?
specifically: ...
Hmm. I tend to frame deontological moral/decision frameworks as heuristics evolved from consequentialist pressures, with lost purposes as to how they came about. That costs some future-optimization flexibility, but also carries serious advantages: it reduces both computational paralysis AND motivated cognition to justify worse behaviors. So, not "correct", but "performs better than correct for many parts of normal life".
The recent discussion of acausal human decisions (which I think is incorrect) has made me wonder - is deontology a ...
https://marginalrevolution.com/marginalrevolution/2021/12/hunting-smaller-animals-is-this-also-a-theory-of-early-economic-growth.html?utm_source=rss&utm_medium=rss&utm_campaign=hunting-smaller-animals-is-this-also-a-theory-of-early-economic-growth may explain a shift from stag-hunting to rabbits. It's not a loss of cooperation; we killed all the stags.
Best few paragraphs I've read recently:
...Last week I compared GameStop to Bitcoin. The thing about Bitcoin as a financial asset, I wrote, is that “there is no underlying claim; there is just a widespread acknowledgment that people think it’s valuable.” I suggested that that was a fascinating and powerful innovation,[3] but that once you are accustomed to it you might get a little overconfident and think that any financial asset—GameStop stock, say—could work the same way. People will buy it because they think it will retain value because people will...
I always enjoy convoluted Omega situations, but I don't understand how these theoretical entities get to the point where their priors are as stated (and especially the meta-priors about how they should frame the decision problem).
Before the start of the game, Omega has some prior distribution of the Agent's beliefs and update mechanisms. And the Agent has some distribution of beliefs about Omega's predictive power over situations where the Agent "feels like" it has a choice. What experiences cause Omega to update sufficiently to ev...
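The kind of update I mean, in the simplest form I can write down (the prior and the observations are invented): Omega maintaining a Beta-Bernoulli posterior on "this agent one-boxes when it feels like it has a choice".

```python
# Toy Bayesian Omega: Beta-Bernoulli update on "agent one-boxes in
# choice-like situations". All priors/observations invented for illustration.
def posterior_one_box(prior_a=1.0, prior_b=1.0, observations=(1, 1, 0, 1)):
    a = prior_a + sum(observations)                    # one-boxing episodes seen
    b = prior_b + len(observations) - sum(observations)  # two-boxing episodes seen
    return a / (a + b)                                 # posterior mean

print(posterior_one_box())  # ~0.667: far from the near-certainty Omega needs
```

The puzzle stands: it takes a lot of observed episodes (or a very concentrated prior) before Omega's posterior justifies the near-perfect prediction the thought experiments stipulate.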
Funny thought - I wonder if people were created (simulated/evolved/whatever) with a reward/utility function that includes a preference for not knowing that function.
Is the common ugh field around quantifying our motivations (whether anti-economic sentiment, or just punishing those who explain the reasoning behind unpleasant tradeoffs) a mechanism to keep us from Goodharting ourselves?
I'm torn between two models of recent technological change. In some sense, the internet from 1992 (hypertext is a cool organizing principle) to early '00s was a transformative societal event, and it arguably destroyed civilization. It's not clear to me whether AI is just the final visible piece of our self-destruction, or if it's a brand new thing, hitting us after we were so weakened by the massive changes previously experienced.
I guess I could assign the breaking point still earlier - mass communication and easy air travel are what really broke through the isolated and understandable lives of most people.
Considering whether to set up (and when to use) an alt account. As LW starts to tolerate more controversial topics, and as the world gets better at searching for and holding people massively over-accountable for things that can be intentionally misconstrued, I find myself self-censoring some of my opinions on those topics.
Perhaps I should just ignore those topics, but I kind of like the interaction with smart people, even on topics I'd rather not be publicly discussing. Perhaps I should make a new throwaway each time (or every quarter, say). ...
Huh, I can actually put content in a shortform post. I wonder what it does.