All of FiftyTwo's Comments + Replies

The setting for Planecrash, sometimes called "Glowlarion", is shared with many other glowfics. It makes some systematic changes from the original Paizo canon, mostly to make it more similar to real-world history of the same period and more internally coherent, and to give the gods and metaphysics more impact.

There's a brief outline of some of the changes here: https://docs.google.com/document/d/1ZGaV1suMeHrDlsYovZbG4c4tdMVgdq0HzgRX0HUGYkU/edit?tab=t.0 

These are the two oldest non-crossover threads I can find: https://www.gl...

Was rereading a bunch of early-2010s LW recently, prompted by getting a reply on one of my old comments, and it's definitely weird. But the flavor of weird feels different somehow? A lot more earnest and direct, with people more willing to make silly jokes and tangents.

There were also more top-level posts along the lines of "Here's this new rationality technique I've been trying, what do people think?" It feels less high-context, I guess? A lot of current discussion is people immersed in some wider meta-debate with long-established sides and real-world stakes to it.

I imagine that kind of posting wouldn't work particularly well these days given that the environment around it has changed. 

Oh, the type of weirdness has definitely changed a lot. But I'm just contending that the level of deviancy is a lot lower these days.

You go to a LW meetup now and there's a lot of wealthy, well-scrubbed/dressed AI researchers (they even lift) and academics and executives and bright-eyed Stanford undergrads sniffing for an internship or YC application fodder. One famous wealthy guy is manic, because he's hypomanic & bipolar is overrepresented among entrepreneurs; don't worry, he'll be fine, until after the meetup when he disappears for a few months. Nob...

I feel like you're conflating two different levels, the discourse in wider global society and within a specific community. 

I doubt you'd find anyone here who would disagree that actions by big companies that obscure the truth are bad. But they're not the ones arguing on these forums or reading this post. Vegans have a significant presence in EA spaces so should be contributing to those productively and promoting good epistemic norms. What the lobbying team of Big Meat Co. does has no impact on that. 

Also, in general I'm leery of any argument of the form "the other side does as bad or worse, so it's okay for us to do so", given history.

I somewhat agree with this, but I think it's an uncharitable framing of the point, since "virtue signalling" generally connotes insincerity. My impression is that the vegans I've spoken with are mostly acting sincerely on their moral premises, but those are not premises I share. If you sincerely believe that a vast atrocity is taking place that society is ignoring, then a strident emotional reaction is understandable.

0Roko
Virtue signalling can be sincere.

I've definitely noticed a shift in the time I've been involved with or aware of EA. In the early 2010s it was mostly focused on global poverty and the general idea of evidence-based charity, and veganism was peripheral. Now it seems like a lot of groups are mainly about veganism, and very resistant to people who think otherwise. And as veganism is a minority position, that is going to put off people who would otherwise be interested in EA.

You still run into the alignment problem of ensuring that the upgraded version of you aligns with your values, or some extension of them. If my uploaded transhuman self decides to turn the world into paperclips that's just as bad as if a non-human AGI does. 

Never really got anywhere. It's long enough ago that I don't really remember why, but I think I generally found it unengaging. I have periodically tried to teach myself programming through different methods since then, but none have stuck. This probably speaks to the difficulty of learning new skills when you have limited time/energy and no specific motivation, more than anything else. (I've had similar difficulties with language learning, but got past them due to short-term practical benefits and devoting specific time to the task.)

It mixes the personal and professional level

Possibly reflective of a wider issue in EA/rationalist spaces where the two are often not very clearly delineated. In that sense EA is more like hobby/fandom communities than professional ones. 

6Viliam
In my opinion, the more money (or other resources, or power) flows through someone's hands, the less excuse that person has for saying "hey, this is just a hobby, we do not want the boring professional norms". Hobby is when you do it in your free time, and if someone does not respect your boundaries, you can easily find a different hobby. If it is your only or major source of income, when denying "consent" might mean losing your income, it is de facto a job. And if someone believes that the people in positions of power who get lots of "consent" from their underlings are evaluating this situation impartially... I may have a bridge to sell you.
1M. Y. Zuo
This is an interesting point. And the pitfalls become obvious when put in that context. 

Saying that people would be better off taking more risks under a particular model elides the question of why they don't take those risks to begin with, and how we can change that, if it's desirable to do so.

The psychological impact of a loss is generally higher than that of a corresponding gain. So if I know I will feel worse about losing $10 than I will feel good about gaining $100, then it's entirely rational under my utility function not to take a 50/50 bet between those two outcomes. Maybe I would be better off overall if I didn't overweight losses, but utility functions aren't easily rewritable by humans. The closest you could come is some kind of exposure therapy for losses.
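As a rough sketch of that arithmetic, using a prospect-theory-style value function where losses are scaled by a loss-aversion coefficient (the specific coefficients below are illustrative assumptions, not something the comment claims):

```python
def subjective_value(x, lam=2.25):
    """Felt value of a monetary change x; losses loom lam times larger than gains."""
    return x if x >= 0 else lam * x

def expected_subjective_value(outcomes, lam=2.25):
    """Expected felt value of a gamble given (probability, change) pairs."""
    return sum(p * subjective_value(x, lam) for p, x in outcomes)

# The 50/50 bet from the comment: lose $10 or gain $100.
bet = [(0.5, -10), (0.5, 100)]

# With a textbook-ish coefficient (~2.25), the bet still feels positive:
print(expected_subjective_value(bet, lam=2.25))  # 0.5*100 - 0.5*2.25*10 = 38.75

# But for someone whose losses loom more than 10x larger than gains,
# declining the bet is consistent with their own utility function:
print(expected_subjective_value(bet, lam=12))    # 0.5*100 - 0.5*12*10 = -10.0
```

The point of the sketch is that "irrational" risk aversion can be a perfectly coherent preference once the asymmetric weighting is treated as part of the utility function rather than a bug in the calculation.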

1JacobW38
Manipulating one's own utility functions is supposed to be hard? That would be news to me. I've never found it problematic, once I've either learned new information that led me to update it, or become aware of a pre-existing inconsistency. For example, loss aversion is something I probably had until it was pointed out to me, but not after that. The only exception to this would be things one easily attaches to emotionally, such as pets, to which I've learned to simply not allow myself to become so attached. Otherwise, could you please explain why you make the claim that such traits are not readily editable in a more general capacity?

Also, we have a huge amount of mental architecture devoted to understanding and remembering spatial relationships of objects (for obvious evolutionary reasons). Using that as a metaphor for purely abstract things allows us to take advantage of that mental architecture to make other tasks easier.

A very structured version of this would be something like a memory palace where you assign ideas to specific locations in a place, but I think we are doing the same thing often when we talk about ideas in spatial relationships, and build loose mental models of them as existing in spatial relationship to one another (or at least I do). 

I think the core thing here is same-sidedness.

The converse of this is that the maximally charitable approach can be harmful when the interlocutor is fundamentally not on the same side as you in trying to honestly discuss a topic and arrive at truth. I've seen people tie themselves in knots trying to apply the principle of charity when the most parsimonious explanation is that the other side is not engaging in good faith, and shouldn't be treated as such.

It's taken me a long time to internalise this, because my instinct is to take what people s...

Thanks. This is the kind of content I originally came to LW for a decade ago, but it seems to have become less popular.

You might find The Origins of Political Order interesting. It emphasises how the principal-agent problem is one of the central issues of governance, and how, without strong mechanisms, systems tend to descend into corruption.

Is there any way of reverse engineering from these pictures what existing images were used to generate them? Would be interesting to see how much similarity there is. 

So we just need to get two superpowers who currently feel they are in a zero sum competition with each other to stop trying to advance in an area that gives them a potentially infinite advantage? Seems a very classic case of the kind of coordination problems that are difficult to solve, with high rewards for defecting.

We have partially managed to do this for nuclear and biological weapons, but only with a massive oversight infrastructure that doesn't exist for AI, and relying on physical evidence and materials control that don't apply to AI. It's not i...

1Jeff Rose
If we do as well with preventing AGI as we have with nuclear non-proliferation, we fail. And nuclear non-proliferation has been more effective than some other regimes (chemical weapons, drugs, trade in endangered animals, carbon emissions, etc.). In addition, because of the need for relatively scarce elements, control over nuclear weapons is easier than control over AI. And, as others have noted, the incentives for developing AI are far stronger than for developing nuclear weapons.
1Not Relevant
But… there isn’t reward for defecting? Like, in a concrete actual sense. The only basis for defection is incomplete information. If people think there is a reward, they’re in some literal sense incorrect, and the truth is ultimately easier to defend. Why not (wisely, concertedly, principledly) defend it? And there are extremely concrete reasons to create that international effort for oversight (e.g. of compute), given convergence on the truth. The justifications, conditioned on the truth, are at least as great if not greater than the nuclear case.

A more charitable interpretation of the same evidence would be that, as a public health professional, Dr Fauci has a lot of experience with the difficulties of communicating complex messages and the political tradeoffs necessary for effective action, and has judged based on that experience what is most effective to say. Do you have data he doesn't? Or a reason to think his experience in his specialty is inapplicable?

8Benquo
This just seems like a much vaguer way to say the same thing I did. Is there a specific claim I made that you disagree with? As far as I can tell, the function of this kind of vagueness is to avoid weakening the official narrative. Necessarily this also involves being unhelpful to anyone trying to make sense of state-published data, Fauci's public statements, and other official and unofficial propaganda. If we have an implied disagreement, it's about whether one ought to participate in a coverup to support the dominant regime, or try to inform people about how the system works.

Earth does have the global infrastructure

It does? What do you mean? The only thing I can think of is the UN, and recent events don't make it very likely they'd engage in coordinated action on anything.

If you convince the CCP, the US government, and not that many other players that this is really serious, it becomes very difficult to source chips elsewhere.

The CCP and the US government both make their policy decisions based on whatever (a weirdly-sampled subset of) their experts tell them.

Those experts update primarily on their colleagues.

It should not take long, given these pieces and a moderate amount of iteration, to create an agentic system capable of long-term decision-making

That is, to put it mildly, a pretty strong claim, and one I don't think the rest of your post really justifies. Without which it's still just listing a theoretical thing to worry about

1Robi Rahman
One year later, in light of the ChatGPT internet and shell plugins, do you still think this is just a theoretical thing? Should we worry about it yet? The fire alarm sentiment seems to have been entirely warranted, even if the plan proposed in the post wouldn't have been helpful.
7Chris_Leong
I don't know enough to evaluate this post. I don't know if it is correct or not. However, a completely convincing explanation could possibly shorten timelines. So is that satisfying? Not really. But the universe doesn't have to play nice.

You're completely right. If you don't believe it, this post isn't really trying to update you. This is more to serve as a coordination mechanism for the people who do think the rest isn't very difficult (which I am assuming is a not-small-number).

Note that I also don't think the actions advocated by the post are suboptimal even if you only place 3-7 years at 30% probability.

If you're an otherwise healthy young person who has tested positive, what's the best thing you can do to prevent getting long covid? There seems to be research saying that exercising too early can make it worse, but other articles say good things about exercise, and I'm not sure how to evaluate it.

Good programmers who are a pain to work with are much less successful than average programmers who are pleasant to work with. Increasing technical competency has diminishing returns. So I'd focus on doing things that get you more experience of working with people; the business-development internship may do that, depending on the details. So might things like working in a bar or restaurant.

Note that this is distinct from the standard advice on developing social skills. Being good at talking to strangers and going to parties is good. But working well with ...

I'd be curious what you think now, after many years to see the effects of things in practice.

4Mass_Driver
I think we're doing a little better than I predicted. Rationalists seem to be somewhat better able than their peers to sift through controversial public health advice, to switch careers (or retire early) when that makes sense, to donate strategically, and to set up physical environments that meet their needs (homes, offices, etc.) even when those environments are a bit unusual. Enough rationalists got into cryptocurrency early enough and heavy enough for that to feel more like successful foresight than a lucky bet. We're doing something at least partly right.

That said, if we really did have a craft of reliably identifying and executing better decisions, and if even a hundred people had been practicing that craft for a decade, I would expect to see a lot more obvious results than the ones I actually see. I don't see a strong correlation between the people who spend the most time and energy engaging with the ideas you see on Less Wrong, and the people who are wealthy, or who are professionally successful, or who have happy families, or who are making great art, or who are doing great things for society (with the possible exception of AI safety, and it's very difficult to measure whether working on AI safety is actually doing any real good). If anything, I think the correlation might point the other way -- people who are distressed or unsuccessful at life's ordinary occupations are more likely to immerse themselves in rationalist ideas as an alternate source of meaning and status.

There is something actually worth learning here, and there are actually good people here; it's not like I would want to warn anybody away. If you're interested in rationality, I think you should learn about it and talk about it and try to practice it. However, I also think some of us are still exaggerating the likely benefits of doing so. Less Wrong isn't objectively the best community; it's just one of many good communities, and it might be well-suited to your needs and quirks in particul

Great to see everyone. Is there somewhere we can sign up for future events in London?

1neilkakkar
This was floated at the event: https://tinyletter.com/acxlondon

Characterising the reaction to Cummings as people overreacting to a small violation of the rules is misleading. The issue wasn't the initial rule violation; it was that the initial denial and lack of even token punishment were symbolic of a wider problem in the Johnson government with corruption and cronyism. Caring about hypocrisy and corruption among leaders is entirely rational, as it is indicative of how they will make other decisions in the future.

4bfinn
This seems like a post-rationalization. IIRC the way it played out over a number of days was that initially it wasn't clear what the facts were, and hence what if anything Cummings had done wrong (e.g. whether his journey had been legal, or at least justified). And even if he had done something wrong, I heard one pundit point out that as Cummings wasn't a minister or public-facing figure there was no requirement for him to resign or be fired (rather than apologise or be disciplined in some way).

But nonetheless the media picture right from the start was that this maverick egg-head weirdo must be guilty of something, even if they weren't sure what exactly. And the public reacted accordingly. For example, 3 days before Cummings' press conference (which IIRC was the first time his side of the story was fully set out) I heard a radio phone-in about what an evil character Cummings must be, in which callers were mostly accusing him of risking his parents' health by going to stay with them. Or saying he must have stopped at a petrol station and so risked people there (he denied this). It later turned out he hadn't even stayed in his parents' house, or had close contact with them, but stayed in another building nearby.

So then it was a question of: was his main journey illegal (with much detailed media analysis of the fine points of the law)? Or if not, how about the short trip to Barnard Castle? Which is what most people - the narrative - have now settled on.

What this all shows is that in this trial by media, Cummings was presumed guilty from the start; and then it was just a matter of finding some crime to pin on him. And once something was found that seemed enough like one, everyone could congratulate themselves that they'd 'known' all along, and so their outrage had always been justified. (I can't recall which cognitive bias this is - but quite a typical example.)

(To avoid doubt, as it turned out I think it's very likely he broke the rules and adjusted his story

Yeah, I like a lot of EY's stuff (otherwise I wouldn't be here), but he does have a habit of treating his own preferences as universal, or failing to appreciate when there might be good reasons that the seemingly obvious solution doesn't work, as is common with people commenting on areas outside their expertise.

I think it's unfair to say "everyone in Europe lost their minds" when the EU health agency was very loudly saying things were fine. It would be more accurate to say that a couple of specific countries' medical regulators and some politicians went crazy.

Obviously that's still bad, but when looking at systemic failures like this it is important to identify the actual source of the problem, which seems to be idiosyncratic political issues in the countries involved. Blaming the wrong people undermines the ones who have been doing a good job.

How would you differentiate this from someone just asking for additional evidence because they think you've made a false statement? E.g. if Alice tells Bob the earth is flat, it's reasonable for him to ask for additional evidence, and doing so doesn't imply he's playing status games. But it could equally reasonably be replied to by saying that Bob is only disagreeing because he thinks Alice isn't high-status enough to make cosmological claims.

4jimmy
Good question. I generally wouldn't ask questions like "is his disagreement explained by status alone or by facts alone?". I generally ask questions more like "if he saw the person saying these things as higher or lower 'status', how much would this change his perception of the facts?" (and others, but this is the part of the picture I think is most important to illuminate here).

If a Fields medalist looks at your proof and says "you're wrong", you're going to respond differently than if a random homeless guy said it, because when a Fields medalist says it you're more likely to believe that your proof is flawed (and rightly so!). Presumably there's no one you hold in high enough regard that if they were to say "the earth is flat" you'd conclude "it's more likely that I'm wrong about the earth being round and all of the things that tie into that than it is that this person is wrong, so as weird as it is, the earth is probably flat", however even there status concerns change how you respond.

Coincidentally, just as I started drafting my response to this I got interrupted to go out to dinner and on the way was told about Newman's energy machine and how it produced more energy than it required, how Big Oil was involved in shutting it down, and the like. This certainly counts as "something I think is false" in the same way Bob thinks "the earth is flat" is false, but how, specifically, does that justify asking for evidence? The case against perpetual motion machines is very solid and this is not what a potentially successful challenge would look like (to put it lightly), so it's not like I need to ask for evidence to make sure I shouldn't be working on perpetual motion machines or something. Since I can't pretend I'd be doing it for my personal learning, what could motivate me to ask? I could ask for evidence because of a sense of ["duty"](http://xkcd.com/386/), but it was clear to me that he wasn't just gonna say "Huh, I guess my evidence is actually incredibly

Like, who has the authority to say "thou shalt not try things that might fail"? As long as you're not conning anybody out of resources, your failure doesn't pick anybody else's pocket.

What about altruistic reasons for asking? If my friend were planning to quit their job and become a famous musician, I would probably try hard to dissuade them, even if it wouldn't directly affect me.

If however I thought they were likely to succeed (e.g. have made money selling music on bandcamp and performing, in talks with a record company, etc.) I probably wouldn't dissuade them.

I feel like after reading this I have a much better insight into how Eliezer thinks than I did before, even having read most of his published work.

I think his model of other people is off though.

Specifically, he uses ideas of comparative status to explain other people not challenging conventional wisdom, or trying new things a lot. Which feels like it could be a fully general argument for any observed behaviour (e.g. it could equally well explain a habit of disproportionately challenging experts, as being in conflict with them puts you at their level a...

Hello! Just rediscovered this thread. The website doesn't seem to be up anymore. How did it go in the end? Where are you at with learning mandarin?

0Lemmih
Site moved to https://clozecards.com/ My attention has mostly been elsewhere but my vocabulary is slowly growing.

Since you seem to be sincere in asking for reasons:

"Whore" is considered an unpleasant word by many people. That, combined with the overall tone, may have made people think your intention was trollish.

You seem to deeply misunderstand the dynamics that led to sex education being the way it is. There is no plausible transition from the world as it exists at present to one where retired sex workers are employed in the school system to teach sex education.

  • a) Because the majority still have moral objections to sex work and it is illegal in many p

...
8Error
Thanks for paying the karma toll to answer me. I picked up the usage from a couple of sex workers' blogs. Now that it's brought to my attention, though, I think they were explicitly trying to reclaim the word, which implies there was a problem with it to begin with. I should have caught that before using it in other venues. Guilty on tone if not trollishness. I'll admit I'm seethingly hostile to grade school in general and sex ed/drug ed/anything with the same general characteristics in particular; I consider the latter fundamentally dishonest and an insult to the students. Agreed. I presented the idea because it seemed both good and original; I know it's not politically tenable. The issues you mention are real ones; I just file them both under "people are crazy, the world is mad."

Maybe he's secretly a creationist; it's unlikely, but it would be more interesting/controversial than the standard internet contrarian ideas.

[This comment is no longer endorsed by its author]
0Lemmih
If there's anything I can do to make your experience better, let me know.

Might be worth including the Amazon.co.uk and other store links.

I want to say no purely because of my default suspicion of anyone offering me a free vacation.

In general, collecting data is cheap and we're getting better at sorting and using it, so bias towards collecting data.

Also, focus on developing skills in areas unlikely to be automated anytime soon.

Personally I was surprised so many cis people strongly identified with their gender.

[typical mind etc...]

All hail!

Do you have any plans for changes?

I've managed to partly transmute my "I want to buy that now" impulse into sending a sample to my Kindle. Then if I never get past the first few pages, I've not actually spent any money; if I reach the end of the sample and still want to continue, I know I'm likely to keep going.

I find I often pick up mindsets and patterns of thought from reading fiction or first-person non-fiction. E.g. I'm a non-sociopath, but I noticed my thought patterns became more similar when reading "Confessions of a Sociopath".

I figure this may be a useful way to hack myself towards positive behaviours. Can anyone recommend fiction that would encourage high-productivity mindsets?

2polymathwannabe
I've noticed that watching Herman's Head (you can find most of the episodes on YouTube) helped me model my mind as a dialogue between competing agents.

[Meta] I often see threads like this where people recommend things that require a very high level of conscientiousness or planning ability to start with (e.g. "if you are tired in the mornings, get out of bed immediately and do x" requires you to be capable of forcing yourself to do x when you are tired).

2Douglas_Knight
I think you are mistaken about what is easy and difficult. Most of these are about dealing with lack of willpower, suggesting that the authors found something that was easier than it looked. Most of them don't compare before and after, but jsteinhardt's does.

Using terms that I picked up here which are not well known, or mean different things in different contexts.

Also, I sometimes over-pattern-match arguments and concepts I've picked up on LessWrong to other situations, which can result in trying to condescendingly explain something irrelevant.

5Smaug123
I do something similar. I consistently massively underestimate the inferential gaps when I'm talking about these things, and end up spending half an hour talking about tangential stuff the Sequences explain better and faster.

Yeah, I've had people complain about the standard basilisk and weird AI speculation stuff. Also the association with neoreactionaries, sexists and HBD people.

0[anonymous]
Sometimes you get the opposite - LW seen as an SJW forum because Scott Alexander is okay with referring to his partner as ze/zir/zur in his blog, and if you are not American, or over 40, or at any rate did not go to a US college in the last 15 years, this comes across as weird. I remember even on Reddit as late as 2009 the "in" "progressive" thing was to hate Bush, not to understand something about transgenderism or feminism or what, so it is a very recent thing, I would say, in mainstream circles.

What incentive does the future AI have to do this once you've already helped it?

0TobyBartels
Well, that's the tricky part. But suppose, for the sake of argument, that we have good reason to think that it will. Then we'll help it. So it's good for the AI if we have good reason to think this. And it can't be good reason unless the AI actually does it. So it will.

Alternatively, sell empty boxes labelled "Don't look!"

If it decreases the number of people who take you seriously, and therefore learn about the substance of your ideas, it's a bad strategy.

0Richard_Kennaway
And if it increases the number of people who take you seriously, and therefore learn about the substance of your ideas, it's a good strategy. I'm sure we can all agree that if something were bad, it would be bad, and if it were good, it would be good. Your point?

Yeah, that would be a much better response. Or alternatively, get someone who is more suited to PR to deal with this sort of thing.

Does MIRI have a public relations person? They should really be dealing with this stuff. Eliezer is an amazing writer, but he's not particularly suited to addressing a non-expert crowd.

That response in /r/futurology is really good actually, I hadn't seen it before. Maybe it should be reposted (with the sarcasm slightly toned down) as a main article here?

Also, kudos to Eliezer for admitting he messed up with the original deletion.
