All of matejsuchy's Comments + Replies

Of course mind uploading would work hypothetically. The question is how much of the mind must be uploaded: a directed graph and an update rule, or an atomic-level simulation of the entire human body? The same principle applies to evolutionary algorithms, reinforcement learning (not the DL sort imo tho, it's a dead end), etc. I actually think it would be possible to get at least a decent lower bound on the complexity needed by each of these approaches. Do the AI safety people do anything like this? That would be a paper I'd like to read.

I don't kno... (read more)

Donald Hobson (2 points)
For the goal of getting humans to Mars, we can do the calculations and see that we need quite a bit of rocket fuel. You could reasonably be in a situation where you had all the design work done, but you still needed to get atoms into the right places, and that took a while. Big infrastructure projects can be easier to design: for a giant dam, most of the effort is in actually getting all the raw materials in place. This means you can know what it takes to build a dam, and be confident it will take at least 5 years given the current rate of concrete production.

Mathematics is near the other end of the scale. If you know how to prove theorem X, you've proved it. This stops us being confident that a theorem won't be proved soon. It's more like the radioactive decay of a fairly long-lived atom: the decay is more likely to happen next week than in any other given week.

I think AI is fairly close to the maths: most of the effort is figuring out what to do. Ways my statement could be false: if we knew the algorithm and the compute needed, but couldn't get that compute; or if AI development was an accumulation of many little tricks, and we knew how many tricks were needed.

But at the moment, I think we can rule out confident long-termism on AI. We have no way of knowing that we aren't just one clever idea away from AGI.
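The decay analogy above can be made concrete. A minimal sketch (my own illustration, not the commenter's math), assuming a constant hypothetical weekly hazard rate for a memoryless process: the probability of the event landing in week k strictly decreases with k, so "next week" is always the single most likely week, even when the expected wait is very long.

```python
import math

lam = 0.01  # hypothetical weekly hazard rate for a long-lived atom

def p_week(k: int) -> float:
    """Probability the decay happens during week k (1-indexed),
    for an exponential (memoryless) waiting time."""
    return math.exp(-lam * (k - 1)) - math.exp(-lam * k)

probs = [p_week(k) for k in range(1, 6)]
# Week 1 is the modal week, even though the expected wait is 1/lam = 100 weeks.
assert all(probs[i] > probs[i + 1] for i in range(len(probs) - 1))
print(probs[0])
```

The same structure is the intuition for AGI timelines in the comment: a constant per-week chance of the key idea arriving makes "soon" the most likely single interval, without making it likely overall.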
ChristianKl (2 points)
The question is not just "how much is needed" but also "what's a reasonable difference between the new digital mind and the biological substance".

Well, I didn't mean to propose an argument.

My impression is that there is not a convincing roadmap. I certainly haven't seen one. However, I recognize that there is a healthy possibility that there is one, and I just haven't seen it.

Which is why I'm asking for the white paper / textbook chapter that presumably has convinced everyone that we can expect AGI in the coming decades. I would be very grateful for anyone who could provide it.

Obviously, AGI is feasible (more than could be said for things like nanotech or quantum computing). However, it's feasible i... (read more)

Donald Hobson (4 points)
I think this post sums up the situation: https://www.lesswrong.com/posts/BEtzRE2M5m9YEAQpX/there-s-no-fire-alarm-for-artificial-general-intelligence If you know how to make an AGI, you are only a little bit of coding away from making it. We have limited AIs that can do some things, and it isn't clear what we are missing. Experts are inventing all sorts of algorithms.

There are various approaches, like mind uploading, evolutionary algorithms, etc., that fairly clearly would work if we threw enough effort at them. Current reinforcement learning approaches seem like they might get smart, with enough compute and the right environment.

Unless you personally end up helping make the first AGI, you personally will probably not be able to see how to do it until after it is done (if at all). The fact that you personally can't think of any path to AGI does not tell us where we are on the tech path. Someone else might be putting the finishing touches on their AI right now. Once you know how to do it, you've done it.

Can you refer me to a textbook or paper written by the AGI crowd which establishes how we get from GPT-n to an actual AGI? I am very skeptical of AI safety but want to give it a second hearing.

Pattern (2 points)
Why is this a response to the parent comment?
Daniel Kokotajlo (2 points)
It sounds like you think that actual AGI won't happen? And your argument is that we don't have a convincing roadmap for how to get there?

China's sciences are not very good, and, relatedly, most of those papers are likely of extremely low quality. I know Chinese, and it's a wonderful language, but I wouldn't recommend learning it for that purpose. My 2c

ChristianKl (6 points)
While most of the papers don't get translated, Chinese authors generally get rewarded more for publications in high-impact-factor journals. That means that a good Chinese scientist currently publishes in English, and the Chinese-language papers will on average be crap. On the other hand, China is progressing and very nationalist, so there's a good chance that some fields will progress to publishing high-quality research in China sooner or later.

I'm having trouble finding it. It was a survey done by David Putrino, it's mentioned here:
"By contrast, Putrino told me that in his survey of 1,400 long-haulers, two-thirds of those who have had antibody tests got negative results, even though their symptoms were consistent with COVID-19."

https://www.theatlantic.com/health/archive/2020/08/long-haulers-covid-19-recognition-support-groups-symptoms/615382/

Here is a more vague claim that seems to corroborate:

"Whereas some “long haulers” were found to be positive for SARS-CoV-2 RNA by RT-PCR at symptom onset, m... (read more)

romeostevensit (2 points)
I expect symptoms-consistent-with is broad enough to interact with a whole lot of stuff that is going on medically and culturally.

As for "long covid" itself, my sense from talking with GPs is that it's mostly misattributed. There's the notorious study which showed that 2/3 of "long covid sufferers" had never been infected with C19 to begin with. It seems like it's just somewhat stronger-than-usual depression? All the risk factors for "long covid" seem to just be risk factors for depression.

On the matter of vaccine effectiveness, do we know what the numbers are for obese vs non-obese? Vaccines commonly don't work (well) for the obese, and given how overweight America is I wonder if this is depressing our numbers. Maybe it's like 98% for thin, 70% for overweight, 40% for morbidly obese or something like that?
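The guessed numbers above are enough to sketch the arithmetic. Assuming the comment's hypothetical per-group effectiveness figures (98/70/40%), and some rough illustrative population shares that are my own assumption rather than real US statistics, the aggregate effectiveness is just a weighted average:

```python
# Effectiveness per group: hypothetical figures from the comment above.
# Population shares: illustrative assumptions only, not real US data.
shares = {"thin": 0.30, "overweight": 0.35, "obese": 0.35}
effectiveness = {"thin": 0.98, "overweight": 0.70, "obese": 0.40}

# Aggregate effectiveness = sum over groups of share * per-group effectiveness.
aggregate = sum(shares[g] * effectiveness[g] for g in shares)
print(round(aggregate, 3))  # 0.679
```

Under these made-up numbers, the population average (~68%) sits far below the 98% figure for the thin group, which is the depressing-the-numbers effect the comment is asking about.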

TurnTrout (4 points)
Do you happen to have a link on hand?

Bullshit jobs are a lot of it. I'd add anything in media, a lot of academia (shocking numbers of those who'd be better off running a plumbing business in some depts), and non-profit / political activism stuff

Yes, it certainly is too short. I fear some of my writing is too long-winded, and wanted to try the whole "most blog posts should be a tweet" thing. Evidently, this is not the most effective strategy.

I don't intend to use education as interchangeable for sanity. Here "sane ideology" is just a cultural belief that maximizes utility. The three ideologies here are: "education is not worth it", "pursue education according to your ability", "get a masters as long as you're not brain dead," which are espoused by much of the lower, middle, and upper classes, resp... (read more)

gilch (2 points)
Does that really follow? From my perspective a lot of the culture war stuff misses the point. Sometimes both sides have good reasons to be upset with the other, but arguments are soldiers. Sometimes both sides are wrong and there's a third way that isn't even part of the conversation. The current rationalist culture disagrees with the mainstream on numerous points, but not necessarily in ways that fall into the Overton window.

It's difficult to talk about this so abstractly with no examples. Are there any examples you could use that the rationalists already mostly seem to agree on? Is it possible you're simply wrong about these? Have you considered double-cruxing?

Perhaps you could introduce it allegorically? Write a fictional story illustrating the point. The insane probably won't get it.
gilch (2 points)
Bullshit Jobs? It's not entirely obvious to me what you're referring to.

Ok so I did some reading and my sense is that obligate homosexuality is not very common in the type of matriarchal hunter-gatherer societies you mention (and is not found in wild animals), but is found in domesticated humans and animals. There does appear to be some genetic component as there is a bit of heritability. The obvious question is if there is some selection effect present in domestic environments not present in the wild.
 

There are two hypotheses which seem somewhat plausible; in both, the gene persists largely due to low mate choice on beha... (read more)

Viliam (2 points)
Ancient Greece? Maybe Japan? Dunno, never cared about this topic deeply, but I assume the obvious candidates would be countries without Christianity or Islam.

Do you believe it? An obligate homosexual sibling would need to help their siblings have an additional 4 children who survive to reproduction in order to break even. That is a significant burden — especially given infant mortality rates in the ancestral environment, we're potentially talking about 8 additional pregnancies, at which point it seems implausible.

Ockham's razor might tell us that LG, where the individual has a mind which motivates against reproduction, are simply the consequence of some developmental failure?
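The break-even figure in the comment above falls out of standard kin-selection arithmetic. A back-of-the-envelope sketch, assuming the comment's rough figures (a 2-child baseline forgone and ~50% survival to reproduction) together with the standard relatedness coefficients:

```python
# Standard relatedness coefficients.
r_child = 0.5          # parent-to-child
r_niece = 0.25         # uncle/aunt-to-niece/nephew

# Baseline assumption from the comment: ~2 offspring forgone by not reproducing.
baseline_children = 2

# Genetic contribution forgone: 2 * 0.5 = 1.0 "offspring equivalents".
forgone = baseline_children * r_child

# Extra surviving nieces/nephews needed to break even: 1.0 / 0.25 = 4.
extra_needed = forgone / r_niece
assert extra_needed == 4

# With ~50% survival to reproduction, that is ~8 extra pregnancies.
survival_rate = 0.5
pregnancies = extra_needed / survival_rate
print(extra_needed, pregnancies)
```

This is only the break-even condition; whether a non-reproducing sibling could actually raise their siblings' surviving offspring count by 4 is the empirical question the thread is arguing about.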

Viliam (2 points)
We do not live in the ancient environment we evolved for. I agree that today in developed countries, each homosexual person having 4 extra nieces/nephews sounds very unlikely -- most people don't have 4 kids.

But in the ancient jungle... okay, I am not sure what exactly was the most likely cause of death, but I would guess war for men, childbirth for women, and dying as a child from malnutrition or disease or accident for both. Women having maybe 15 kids on average, of whom maybe 3 reach adulthood... and especially if you have matriarchy, that is, kids belong to their mother, fatherhood is an unknown concept, men naturally take care of their nieces/nephews... especially considering that most men did not reproduce anyway... it sounds possible.

(It would make even more sense from the evolutionary perspective as a conditional response: a gene which, if you are male, makes you gay if you are below-average masculine compared to the rest of your tribe. You were most likely not going to reproduce anyway, so don't risk your life fighting the local alpha male; focus on feeding and protecting your relatives instead.)

Makes me wonder... in societies without homophobia, do gays live longer than heterosexual men on average?

Thanks for offering this insight! Could you clarify how those things are selected for in training? I am actually struggling to imagine how they could be selected for in a BUD/S context — so sharing would be helpful!

Also, you say that the training had effects but "not to that magnitude ... not necessarily even in that direction." I'm confused — it sounds like your friend enjoyed effects both to that magnitude and in that direction. Am I misunderstanding? 

Also, if he did enjoy such effects as you describe, do you have any hypotheses for the mechanism? Given that such radical changes are quite rare naturally, we'd expect there to be something at play here right?

 

jimmy (7 points)
(Army special forces, not SEALs)

Scrupulosity: They had some tough navigation challenges where they were presented with opportunities to cheat, such as using flashlights or taking literal shortcuts, and several were weeded out there.

Reliability: They had peer reviews, where the people who couldn't work well with a team got booted. Depends on what exactly you mean by "reliability", but "we can't rely on this guy" sounded like a big part of what people got dinged for there.

"Viewing life as a series of task-oriented challenges" seems like a big part of the attitude my friend had that helped him do very well there, even if a lot of it comes through as persistence. Some of it is significantly different though, like in SERE training, where the challenge for him wasn't so much "don't quit" as it was "Stop giving your 'captors' attitude, you dummy. Play to win."

Yeah, that was poorly explained, sorry about that. The "magnitude" is less than it seems at a glance for a couple of reasons. He wasn't a "pot smoking slacker" because he lacked motivation to do anything; he was a "pot smoking slacker" because he didn't have respect for the games he was expected to play. When you look at him as a 12-year-old kid, you wouldn't think of him joining the military and waking up early with a buzz cut and saying "Yes sir!". But when you hear he joined the special forces in particular, it's not "Wow! To think he could grow up to excel and take things seriously!", it's "Hm. The military aspect is a bit of a twist, but it makes sense. Kid's definitely the right kind of crazy."

He was always a bit extreme; it's just that the way it came out changed -- and the military training was at least as much an effect of the change as it was a cause. It didn't come out in studying hard for straight As in college or anything that externally obvious, but there were some big changes before he joined the military. For example, he ended up deciding that there was something to the C

Thank you for offering feedback! The study you mentioned also references another that may indicate that further studies could be helpful to determine whether there is an effect: "The results of McDonald, et al. (1988) suggest, inconclusively, that some personality changes may occur during SEAL training" (p. 12). Your criticism is well-taken; I agree that the SEAL example is a difficult one because of the strong selection effects. Generally speaking, one should a priori expect more composite conscientiousness in any elite group (except may... (read more)

jimmy (3 points)
I kinda fit that. I know someone who went from "pot smoking slacker" to "elite and conscientious SOF badass", which kinda looks like what you're talking about from afar. However, my conclusions from actually talking to him about it all before, during (iirc?), and after are very different. The training seems to be very, very much about selection: everyone who got traumatized was weeded out, and things like being "reliable, and scrupulous, viewing life as a series of task-oriented challenges" were all selected for. The training did have some effects, but not to that magnitude, not by that mechanism, and not necessarily even in that direction.