All of xepo's Comments + Replies

Interior design, please!  I can never figure out which pieces of furniture will actually look good together or flow nicely in a home.  Especially when combined with lighting and shelves and art.  

Does the Scott Alexander post lay this out? I am having difficulty finding it. 

He doesn’t really. Here’s the original article:

https://www.astralcodexten.com/p/mr-tries-the-safe-uncertainty-fallacy

Also there was a long follow-up where he insists 50% is the right answer, but it’s subscriber-only:

https://www.astralcodexten.com/p/but-seriously-are-bloxors-greeblic

I claim the problem is that our model is insufficient to capture our true beliefs.

There’s a difference in how we act between a coin flip (true 50/50) and “are bloxors greeblic?” (a question we have no info about).

For example, suppose our friend came and said "Yes, I know this one, the answer is (heads|yes)". For the coin flip you'd say "are you out of your mind?" and for bloxors you'd say "OK, sure, you know better than me".

I’ve been idly pondering this since Scott Alexander’s post. What is a better model?

One option would be to have another percentage — a meta-... (read more)

1CrimsonChin
  This is a model that I always tend to fall back on, but I can never find a name for it, so I find it hard to look into. I have always figured I am misunderstanding Bayesian statistics and that credence is factored in somehow. That doesn't really seem to be the case, though.  Does the Scott Alexander post lay this out? I am having difficulty finding it.  The closest term I have been able to find is Kelly constants, a measure of how much "wealth" you should rationally put into a probabilistic outcome. Replace "wealth" with credence and maybe it could be useful for decisions, but even this misses the point! 
1ProgramCrafter
It's possible to do such modelling with beta distributions (which behave much like meta-probabilities). The combination of B(1;1) (something like a non-informative prior) and B(a;b) (the information obtained from the friend) will be B(1+a;1+b) — moved from equal probabilities far more than the combination B(1000;1000)⋅B(a;b)=B(1000+a;1000+b).
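For concreteness, here is a minimal sketch of that update, treating the friend's testimony as pseudo-counts added to the prior (the specific counts are made up purely for illustration):

```python
# Beta(a, b) conjugate updating: new evidence just adds pseudo-counts.
def posterior_mean(prior_a, prior_b, evid_a, evid_b):
    """Posterior mean of Beta(prior_a + evid_a, prior_b + evid_b)."""
    a, b = prior_a + evid_a, prior_b + evid_b
    return a / (a + b)

# Treat the friend's "yes" as, say, 10 pseudo-observations in favor.
weak = posterior_mean(1, 1, 10, 0)          # bloxors: B(1;1), near-total ignorance
strong = posterior_mean(1000, 1000, 10, 0)  # coin flip: B(1000;1000), high confidence
print(round(weak, 3), round(strong, 3))     # 0.917 0.502 -- the weak prior moves far more
```

The same evidence barely budges the coin-flip prior but almost fully convinces the ignorant one, which matches the "Ok, sure, you know better than me" intuition.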

I don’t understand this.  Plus I suspect it was largely written by an LLM.  

First of all, where does this theory come from?  Did you invent it?  How much evidence does it have?

The rope analogy doesn’t seem to offer much.  I don’t see any intuition-pumps the rope gives you that simply talking about challenges and rewards wouldn’t.  Plus there’s so much in this that isn’t explained by the analogy, for example:

However, by taking on challenging projects that align with the employee's skills and interests and pro

... (read more)
0Eris Discordia
The base idea is that your perception of the value of that breakfast is shaped as much by the effort your brain thinks it's going to take to keep getting that breakfast as it is by your tastebuds. It is meant to describe what I believe is an already-known phenomenon in motivation, in a metaphor that is easy for people to engage with when attempting to hack their own reward system.

Most of your arguments hinge on it being difficult to develop superintelligence. But superintelligence is not a prerequisite for AGI destroying all humanity. This is easily provable by the fact that humans already have the capability to destroy all humanity (nukes and bioterrorism being only two ways).

You may argue that if the AGI is only human-level, we can thwart it. But that doesn’t seem obvious to me, primarily because of AGI’s ease of self-replication. Imagine a billion human-intelligence aliens suddenly popping up on the internet with the intent to destroy humanity. They’re not 100% certain to succeed, but it seems pretty likely to me that they would.

2Shmi
This is a fair point, and a rather uncontroversial one: increasing capabilities in any area lowers the threshold for a relevant calamity. But this seems like a rather general argument, no? In this case it would go "imagine everyone having a pocket nuke or a virus synthesizer".

No. With unspecified units, that's saying (energy - x) of sodium = 8 * (energy - x) of water. For Celsius, x = 273.15.

I don’t think you understood my point, but I was a little wrong anyway. Turns out Bill Gates was close enough: https://en.wikipedia.org/wiki/TerraPower

Sodium offers a 785-Kelvin temperature range between its solid and gaseous states, nearly 8x that of water's 100-Kelvin range.

There are nuclear plant designs using natural convection with water for emergency cooling.

OK? Was he trying to compare with those designs? Or the ones that c... (read more)

2bhauth
What would you consider good evidence?

I actually think Gates’ article was pretty reasonable and don’t think you should read as much into it as you are. To be fair, I’m not a physicist, and I don’t know anything about this tech and very little about nuclear reactors in general, so I might phrase some of my objections as questions back to you.

Part of the reason I think it’s reasonable is that it’s marketing material more than anything, and if you give him the benefit of the doubt on his exact phrasing, or interpret in the context he means, then there’s rational explanations.

Gates is using unspe

... (read more)
2bhauth
No. With unspecified units, that's saying (energy - x) of sodium = 8 * (energy - x) of water. For Celsius, x = 273.15. There are nuclear plant designs using natural convection with water for emergency cooling. Because when I look up my half-assed ideas they're often close to what people use today or what people on the cutting edge are researching. Because when I get to talk to people involved in things, I can tell how smart they are relative to me. These are not disagreements among serious nuclear engineers. Gates just found a bunch of clowns instead.

Is it possible that the disconnect is that you’re valuing technical ability over being good at people + management?  Most high-level executives don’t need to understand these things in detail, because they have other people they trust who do understand them.

Powerpoints need to be 5-word phrases because that’s how you should communicate with crowds.  And it’s not simply about reducing complexity to the lowest common denominator (though that is part of it).  It’s more about how getting any team of more than a few people to do anything at all toge... (read more)

3DirectedEvolution
  There are inaccuracies in the article, period. It would be embarrassing for an engineer to make the mistake of conflating Celsius and Kelvin when comparing boiling point ratios, as in the claim that sodium's boiling point is 8x higher than that of water. Bill Gates' audience is going to have a number of technically savvy people in it, he knows it, and this alone is a college freshman/high school-level mistake. There are others. My update on reading the article is, in fact, to downgrade my perception of Bill Gates' technical expertise beyond the world of computer software and hardware, and to trust his ability to communicate science less. That said, nobody needs to be an expert in every subject, and it might be that Gates' wealth and diverse interests and fame simply put him in a position to try and interpret areas of science he's not able to understand adequately. He's unusual for a billionaire founder/CEO figure, and I personally wouldn't update too much on his mistake here as evidence about the ability of other CEOs to understand their company's specific technology to a level of depth adequate to run the business well. But I would put some probability mass into "CEOs are, in general, shockingly bad at understanding the technologies and products their company sells and they also don't have the ability to tell who in their company does understand what their company is selling."
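For what it's worth, the arithmetic behind the dispute can be checked directly. A quick sketch using approximate handbook melting/boiling points (my own illustration, not from either comment): the "8x" is defensible only as a ratio of liquid temperature ranges, which is a unit-independent difference, not as a ratio of boiling points, which depends entirely on the temperature scale chosen.

```python
# Approximate phase-change temperatures in Kelvin.
na_melt, na_boil = 371.0, 1156.0     # sodium: ~98 C to ~883 C
h2o_melt, h2o_boil = 273.15, 373.15  # water: 0 C to 100 C

# Ratio of liquid ranges: a difference, so the same in K and C.
liquid_range_ratio = (na_boil - na_melt) / (h2o_boil - h2o_melt)

# Ratio of boiling points: changes with the zero point of the scale.
celsius_bp_ratio = (na_boil - 273.15) / (h2o_boil - 273.15)
kelvin_bp_ratio = na_boil / h2o_boil

print(round(liquid_range_ratio, 2),  # ~7.85 -- the legitimate "nearly 8x"
      round(celsius_bp_ratio, 2),    # ~8.83 -- only in Celsius
      round(kelvin_bp_ratio, 2))     # ~3.1  -- in Kelvin
```

So "785-Kelvin range vs 100-Kelvin range" is fine as stated; "boiling point 8x higher" is the unit-dependent phrasing being objected to.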
3bhauth
No, they don't. They are unable to tell the difference between technical competence and BS. That's why Elon's companies are relatively successful despite his autism and mediocre understanding. US corporate executives are now selected largely for skill at "moral mazes" and I don't think "being good at people+management" is an entirely accurate description of that. They're good at dealing with similar people, who - being similar - are also not good at actually doing things. Amazon banning Powerpoint worked out pretty well for it. Maybe all the theorizing about it actually being good was just justification. I'd like to believe that, but no. It was not simplification for the common people. It was an accurate overview of how Bill Gates actually understands the technology involved and why he likes it. I'm not suggesting copying the Chinese government, but China is doing a better job at a lot of stuff than the USA now - the USA seems to be largely coasting on past success while institutional quality declines.

Oh, I actually think those studies are probably accurate for the thing they’re measuring, which is "short-term individual developer productivity".  But they don’t really account for "long-term productivity" or "team productivity", both of which I think benefit a lot from being in the office.   You get an uptick in people’s ability to focus, but a downtick in people’s ability to communicate, and both education and coordination depend on the latter.

As a counterpoint, consider that ~every major tech company is constantly pushing for people to ... (read more)

Programmers don't become more productive when they move to Silicon Valley or Seattle or NYC.

Why do you believe this?   I definitely became a better developer when I moved to NYC.  How do you know that everyone else didn’t as well?

 

correlation of wages with housing prices and with wealth is stronger.

I think this is just recursive?  Of course wages are higher in places with more wealth.  Higher wages cause more wealth.  And housing prices can follow, just because of supply and demand (there’s a higher supply of dollars, so people are ... (read more)

2Dagon
"programmers" is too large and diverse a group to meaningfully discuss.  For this purpose, it's sufficient to say "_SOME_ programmers become more productive when able to interact in-person with other workers and experts, who are currently concentrated in a few cities".  And that is a pretty easy claim to make - at least xepo and I assert it to be the case for ourselves. I've done a lot of hiring, engineer evaluations, and related attempts at productivity measurement for a very large software company.  I can say with certainty that there was good evidence that the enforced shift to WFH was a step-change loss in productivity, only some of which has come back.  I will also say that it's NOT evenly distributed - some teams and individuals did recover quickly (and even benefitted).  The median and mean were quite negative, though.  Standard caveats: measurement is based on imprecise proxies, and Goodhart may have made it even more variable: it was a visible excuse for a performance drop, rather than trying to game the metrics to look good.
1bhauth
I think the net value added per programmer-hour of some open-source projects where everyone works remotely is far higher than anything done at an office for a corporation. Do you disagree about that?

The legalizer is the only thing here that isn’t inherently evil. The others may not be end-the-world kind of evil, but still evil.

To expound on the first two: they’re morally wrong because they’re lying. Your explanation of why they’re ok seems like some very short-sighted utilitarian thinking.

1intellectronica
It's hard to define what is inherently evil. Your definition (IIUC) includes all lying. I don't think I can agree to that. Everyone lies all the time, and most of these acts of lying couldn't be classified as evil.

why are you trying to attack instead of educate? 

 

90% of your article is “rationalists do it wrong”.  Why?  Who cares?  Teach us how to do it better instead of focusing on how we’re doing it wrong.  

-5AnthonyRepetto
Answer by xepo43

You’re thinking of money as being more central than it is.  Instead, try shifting your view back to barter days, where currency is just another thing that can be traded for.  So, if you had some corn to sell, you could sell it for $3, or you could sell it for 5 cans of beans.  Another way to phrase that is: You could use your corn to buy 5 cans of beans, or you could use your corn to buy $3.

Now, imagine that the stock market, instead of being valued in currency, was valued in the amount of whatever random good you want.   E.g. the s&... (read more)

and also very smart/impressive/competent etc

 

My theory is that being a politician in the way that presidents have to be is genuinely extremely difficult.  Of course they make gaffes and stupid mistakes, but that’s because they have the difficulty level set to “stupidly insane”.  And most people in those roles would actually seem much more impressive if you swapped them with the millions of Americans you’re talking about.

There’s some intuitiveness about this: Look at any modern day campaign trail — it’s public speaking, Q&am... (read more)

Thanks for posting this!  Will try it.

 

The trick that works for me when I have too many things in my head and it’s keeping me from sleeping is to pull out my phone/laptop, and write them all down.  Just stream-of-consciousness. Write everything you can think of, until you run out of things to write.  Pause for a second looking for more things to write and if nothing comes to you, then turn off the phone and go to sleep.

It just clears my head and lets me stop circling.

I’m not sure I fully understand the original argument, but let me try. Maybe it’ll clarify it for me too.

You’re right that I would choose L on the same basis you describe. But that’s not a property of the world, it’s just a guess. It’s assuming the conclusion — the assumption that “I” is randomly distributed among the clones. But what if your personal experience is that you always receive “R”? Would you continue guessing “L” after 100 iterations of receiving “R”? Why? How do you prove that that’s the right strategy? What do you say to the person who has di... (read more)

1benjamincosman
A lot of this appears to apply to completely ordinary (non-self-localizing) probabilities too? e.g. I flip a coin labeled L and R and hide it in a box in front of you, then put a coin with the opposite side face up in a box in front of Bob. You have to guess what face is on your coin, with payouts as in my game 1. Seems like the clear guess is L. And yet this time it's all classical probability - you know you're you, you know Bob is Bob, and you know that the coin flips appearing in front of you are truly random and are unrelated to whether you're you or Bob (other than that each time you get a flip, Bob gets the opposite result). So does your line of thought apply to this scenario too? If yes, does that mean all of normal probability theory is broken too? If no, which part of the reasoning no longer applies?
3benjamincosman
Also am I modelling dadadarren correctly here: """ Game 1 Experimenter: "I've implemented the reward system in this little machine in front of you. The machine of course does not actually "know" which of L or R you are; I simply built one machine A which pays out 1000 exactly if the 'I am L' button is pressed, and then another identical-looking machine B which pays out 999 exactly if the 'I am not L' button is pressed, and then I placed the appropriate machine in front of you and the other one in front of your clone you can see over there. So, which button do you press?" Fissioned dadadarren: "This is exactly like the hypothetical I was discussing online recently; implementing it using those machines hasn't changed anything. So there is still no correct answer for the objective of maximizing my money; and I guess my plan will be to..." Experimenter: "Let me interrupt you for a moment, I decided to add one more rule: I'm going to flip this coin, and if it comes up Heads I'm going to swap the machines in front of you and your other clone. flip; it's Tails. Ah, I guess nothing changes; you can proceed with your original plan." Fissioned dadadarren: "Actually this changes everything - I now just watched that machine in front of me be chosen by true randomness from a set of two machines whose reward structures I know, so I will ignore the anthropic theming of the button labels and just run a standard EV calculation and determine that pressing the 'I am L' button is obviously the best choice." """ Is this how it would go - would watching a coin flip that otherwise does not affect the world change the clone's calculation on what the correct action is or if a correct action even exists? Because while that's not quite a logical contradiction, it seems bizarre enough to me that I think it probably indicates an important flaw in the theory.

I ~entirely agree with you.

 

At some point (maybe from the beginning?), humans forgot the raison d’être of capitalism — encouraging people to work towards the greater good in a scalable way.  It’s a huge system that has fallen prey to Goodhart’s Law, where a bunch of powergamers have switched from “I should produce the best product in order to sell the most” to “I should alter the customer’s mindset so that they want my (maybe inferior) product”.  And the tragedy of the commons has forced everyone to follow suit.

Not only that, the system that c... (read more)

-2Valentine
Yep. The USA Constitution was an attempt to human-align an egregore. But it was done in third person, and it wasn't mathematically perfect, so of course egregoric evolution found loopholes.   Thank you! By Karl Schroeder?

Did you set up the survey in a way that you can treat the people who haven’t had Covid as a control?

If not, I’m afraid this is gonna be pretty inconclusive — my best explanation is that people are blaming ~every health ailment they have on long Covid, even if it’s unrelated.

I think one of Zvi’s recent posts highlighted a study that convinced him that long Covid mostly wasn’t a thing, but I can’t seem to find it now.

2KatjaGrace
n probably too small to read much into it, but yes: https://www.lesswrong.com/posts/3Rtvo6qhFde6TnDng/positly-covid-survey-2-controlled-productivity-data
5gabrielrecc
UK's ONS has a nice comparison with controls which shows a clear difference, see Fig 1. (Note that this release uses laboratory-confirmed COVID-19 only, unlike some of their other releases.)
2arunto
Based on a study by the University of Mainz (Germany), it seems to me that long Covid is real, but not necessarily if you look at the specific symptoms thought to be associated with long Covid. They compared three groups: Group 1, Covid patients ("wissentlich infizierte", knowingly infected); Group 2, persons with Covid antibodies not knowing that they had Covid ("unwissentlich infizierte", unknowingly infected); Group 3, persons without Covid antibodies ("ohne Infektion", without infection). a) Looking at a list of possible long Covid symptoms, 59.5% of group 1 were asymptomatic, 60.4% of group 2, and 54.3% of group 3. Serious long Covid symptoms: 7.3% in group 1, 9.3% in group 2, and 11.3% in group 3 [slides 18 and 21]. Taking this at face value would indicate a small protective effect of getting Covid against long Covid symptoms (not the official conclusion of that study, of course, and not mine either, but it would have made an amusing headline). b) Looking at subjective health state, however, yielded more plausible results: 29.8% of group 1 (knowingly infected) reported worse health compared to before the pandemic, versus 22.4% of group 2 (unknowingly infected) and 22.0% of group 3 (not infected) [slide 13]. Maybe the difference between group 1 and groups 2+3 could be seen as a rough estimate for long Covid (my conclusion, not necessarily the study's); that would put the risk of long Covid at about 7.5%. Of course, there are factors that could make this estimate too low (having had Covid could reduce anxiety-related health problems compared to the other groups, so organ-based health problems in group 1 could be more frequent than 7.5% and still yield the same overall results) or too high (persons who know they had Covid may think they should say their health is worse, e.g. because of the discussion about long Covid).
4Elizabeth
I'm very curious about this as well. I expect MTurk (which Positly is built on) to disproportionately draw from people willing to tolerate a low wage for increased flexibility, who are disproportionately disabled.
5Bezzi
I suppose that you mean the paper linked in this post: That said, I definitely know people with ongoing health problems after recovering from Covid, and I would be really confused if this turned out to be just a belief.