All of devi's Comments + Replies

devi120

Please see my comment on the grandparent.

I agree with Jessica's general characterization that this is better understood as multi-causal rather than as directly caused by the actions of any one person.

devi990

Including Olivia, and Jessica, and I think Devi. Devi had a mental breakdown and detransitioned IIHC

Digging out this old account to point out that I have not in fact detransitioned, but find it understandable why those kinds of rumours would circulate given my behaviour during/around my experience of psychosis. I'll try to explain some context for the record.

In other parts of the linked blogpost Ziz writes about how some people around the rationalist community were acting on or spreading variations of the meme "trans women are [psychologically] men". ... (read more)

devi20

However, this likely understates the magnitudes of differences in underlying traits across cities, owing to people anchoring on the people who they know when answering the questions rather than anchoring on the national population

I think this is a major problem. This is mainly based on taking a brief look at this study a while back and being very suspicious of it explicitly contradicting so many of my models (e.g. South America having lower Extraversion than North America, and East Asia being the least Conscientious region).

devi280

The causal chain feels like a post-justification and not what actually goes on in the child's brain. I expect this to be computed using a vaguer sense of similarity that often ends up agreeing with causal chains (at least well enough in domains with good feedback loops). I agree that causal chains are more useful models of how you should think explicitly about things, but it seems to me that the purpose of these diagrams is to give a memorable symbol for the bug described here (use case: recognizing and remembering the applicability of the technique).

devi20

I just remembered that I still haven't finished this. I saved my survey response partway through, but I don't think I ever submitted it. Will it still be counted, and if not, could you give people with saved survey responses the opportunity to submit them?

I realize this is my fault, and understand if you don't want to do anything extra to fix it.

3Robi Rahman
Someone said elsewhere in this thread that if you stop in the middle of the survey, it does record the answers you put in before quitting.
devi00

I wasn't only referring to wanting to live where there are a lot of people. I was also referring to wanting to live near very similar/nice people and far from very dissimilar/annoying people. I think the latter, together with the expected ability to scale things down, would make people want to live in smaller, more selected communities, even if they were in the middle of nowhere.

2ChristianKl
People basically want to live where they can find a well-paying job. During the Great Leap Forward, Mao thought that the factories being in cities was simply a coordination problem. He then ordered them moved out of the cities where they had grown organically. It was a disaster. A big company like Google could theoretically move its business headquarters to the middle of nowhere. On the other hand, that would likely be a very bad business decision. Its employees simply wouldn't want to move to the middle of nowhere.
devi00

Where people want to live depends on where other people live. It's possible to move away from bad Nash equilibria by cooperation.

2ChristianKl
To the extent that people want to live where other people live, it's useful to have high density. Flat buildings aren't optimal for cities even when they are cheap to build.
devi00

Yes, robust cooperation is not worth much to us if it's cooperation between the paperclip maximizer and the pencilhead minimizer. But if there are a hundred shards that make up human values, and tens of thousands of people running AIs trying to maximize whatever values they see fit, it's actually not unreasonable to assume that the outcome, while not exactly what we hoped for, is comparable to incomplete solutions that err on the side of (1) instead.

After having written this I notice that I'm confused and conflating: (a) incomplete solutions in the sense of there not... (read more)

1AlexMennen
If value alignment is sufficiently harder than general intelligence, then we should expect that given a large population of strong AIs created at roughly the same time, none of them should be remotely close to Friendly.
devi50

It's important to remember the scale we're talking about here. A $1B project (even when considered over its lifetime) in such an explosive field, with such prominent backers, would be interpreted as nothing other than a power grab unless it included a lot of talk about openness (it will still be interpreted that way, but as a less threatening one). Read the interview with Musk and Altman and note how they're talking about sharing data and collaborations. This will include some noticeable short-term benefits for the contributors, and pushing for safety, either via including some... (read more)

It's important to remember the scale we're talking about here. A $1B project (...) in such an explosive field

I was sure this sentence was going to complete with something along the lines of "is not such a big deal". Silicon Valley is awash with cash. Mark Zuckerberg paid $22B for a company with 70 employees. Apple has $200B sitting in the bank.

8AlexMennen
Not necessarily. In a multi-polar scenario consisting entirely of Unfriendly AIs, getting them to cooperate with each other doesn't help us.
devi10

They seem deeply invested in avoiding an AI arms race. This is a good thing, perhaps even if it speeds up research somewhat right now (avoiding increased speedups later might be the most important thing: think e^x vs 2+x).

Note that if the Deep Learning/ML field is talent-limited rather than funding-limited (which seems likely given how much funding it has), the only acceleration effects we should expect are from connectedness and openness (i.e. better institutions). Given that some of this connectedness might be through collaboration with MIRI, this could very well... (read more)
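(A gloss on the "e^x vs 2+x" shorthand above; the notation is my own illustration, not from the original comment: under roughly exponential progress, a one-off additive speedup matters far less than anything that raises the growth rate.)

```latex
% Illustrative sketch: additive boost vs. increased growth rate.
% Baseline progress:       P(t)        = e^{rt}
% One-off additive boost:  P_add(t)    = e^{rt} + c
% Higher growth rate:      P_rate(t)   = e^{(r+\varepsilon)t}
\[
  P_{\mathrm{add}}(t) - P(t) = c \quad \text{(bounded)}, \qquad
  \frac{P_{\mathrm{rate}}(t)}{P(t)} = e^{\varepsilon t} \to \infty .
\]
% So avoiding later increases in the growth rate can dominate
% avoiding a modest speedup today.
```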

devi40

Does Java (the good parts) refer to the O'Reilly book of the same name? Or is it some proper subset of the language, like what Crockford describes for JavaScript?

6Darmani
It's more like the Crockford book -- a set of best practices. We use a fairly functional style without a lot of moving parts that makes Java very pleasant to work with. You will not find a SingletonFactoryObserverBridge at this company.
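(For concreteness, here is a minimal, hypothetical sketch of what a "functional style without a lot of moving parts" can look like in Java -- immutable data plus pure, stream-based transformations. The class and method names are invented for illustration, not taken from Darmani's codebase.)

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical example: an immutable value type and pure functions,
// instead of mutable state, setters, and deep class hierarchies.
public final class Order {
    private final String id;
    private final long cents;

    public Order(String id, long cents) {
        this.id = id;
        this.cents = cents;
    }

    public String id() { return id; }
    public long cents() { return cents; }

    // Pure function: takes data in, returns data out, mutates nothing.
    public static long totalCents(List<Order> orders) {
        return orders.stream().mapToLong(Order::cents).sum();
    }

    public static List<String> largeOrderIds(List<Order> orders, long thresholdCents) {
        return orders.stream()
                .filter(o -> o.cents() >= thresholdCents)
                .map(Order::id)
                .collect(Collectors.toList());
    }
}
```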
devi70

Is the idea to get as many people as possible to sign this? Or do we want to avoid the image of a giant LW puppy jumping up and down while barking loudly, when the matter finally starts getting attention from serious people?

8lukeprog
After the first few pages of signatories, I recognize very few of the names, so my guess is that LW signers will just get drowned in the much larger population of people who support the basic content of the research priorities document, which means there's not much downside to lots of LWers signing the open letter.
devi170

Men made up 88.8% of respondents; 78.7% were straight, 1.5% transgender, ...

The author makes it sound like this makes us a very male-dominated straight cisgender community.

Mostly male, sure. But most people won't compare the percentage of heterosexual and cisgender respondents with that of the general population and notice that we are in fact more diverse.

Never mind comparing; simply writing "Women made up 11.2% of respondents, 21.3% were not straight..." would have put a very different spin on it.

devi20

But how can you take issue with our insistence that people use hand sanitizer at a 4-day retreat with 40 people sharing food and close quarters?

This is not something that would cross my mind if I were organizing such a retreat. Making sure people who handled food washed their hands with soap, yes, but not hand sanitizer. Perhaps this is a cultural difference between (parts of) the US and Europe.

5gwillen
I think hand sanitizer is more feasible for practical reasons? Generally in the sorts of spaces where people gather for things like this, there is not a sink near the food. So I'm used to there being hand sanitizer at the beginning of the food line, not because hand sanitizer is great, but because it's inconvenient and time consuming (and overbearing) to ask everyone to shuffle through the restroom to wash their hands before touching the food.
7Metus
US Americans are overly obsessed with hygiene from the point of view of the average European.
devi50

It may be more exciting, but the HoTT book has a bad habit of sending people down the homotopy rabbit hole. People with CS backgrounds will probably find it easier to pick up other type theories. (In fact, Church's "simple type theory" paper may be enough instead of an entire textbook... maybe I'll update the suggestions.)

Yeah, it could quite easily sidetrack people. But simple type theory simply wouldn't do for foundations, since you can't do much mathematics without quantifiers, or dependent types in the case of type theory. Further, IMHO, t... (read more)
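(A small aside on the quantifier/dependent-type connection: in dependent type theory a universally quantified statement is itself a dependent function type, so a proof of it is literally a function. A minimal sketch in Lean 4, added for illustration rather than taken from the thread:)

```lean
-- A proof of `∀ n, n + 0 = n` is a dependent function: it sends each
-- `n : Nat` to a proof of the equation at that particular `n`.
-- Here `rfl` works because `n + 0` reduces to `n` by definition.
theorem add_zero_example : ∀ n : Nat, n + 0 = n :=
  fun n => rfl
```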

devi90

The recommended order for the papers seems really useful. I was a bit lost about where to start last time I tried reading a chunk of MIRI's research.

The old course list mentioned many more courses, in particular ones leaning more towards Computer Science than Mathematics (notably, there is no AI book mentioned). Is this change mainly due to the different aims of the guides, or does it reflect an opinion within MIRI that those areas are not more likely to be useful than what a potential researcher would have studied otherwise?

I also notice that within the subfields ... (read more)

9Rob Bensinger
It looks like Artificial Intelligence: A Modern Approach has been re-added to the up-to-date course list, near the bottom. Some of the books removed from the old course list will get recommended in the introduction to the new Sequences eBook, where they're more relevant: The Oxford Handbook of Thinking and Reasoning, Thinking and Deciding, Harry Potter and the Methods of Rationality, Good and Real, and Universal Artificial Intelligence. Boolos et al.'s Computability and Logic replaces Mendelson's Introduction to Mathematical Logic, Sipser's Introduction to the Theory of Computation, and Cutland's Computability. Jaynes' Probability Theory and Koller/Friedman's Probabilistic Graphical Models replace Mitzenmacher/Upfal's Probability and Computing and Feller's Introduction to Probability Theory. Gödel, Escher, Bach and the recommendations on functional programming, algorithms, numerical analysis, quantum computing, parallel computing, and machine learning are no more. If people found some of the removed textbooks useful, perhaps we can list them on a LW wiki page like http://wiki.lesswrong.com/wiki/Programming_resources.
9So8res
Thanks! :-D Let me know if you want any tips/advice if and when you start on another read-through.

Mostly different aims of the guides. I think Louie's criterion was "subjects that seem useful or somewhat relevant to FAI research," and was developed before MIRI pivoted towards examining the technical questions. My criterion is "prerequisites that are directly necessary to learning and understanding our active technical research," which is a narrower target.

This is representative of the difference --- it's quite nice to know what modern AI can do, but that doesn't have too much relevance to the current open technical FAI problems, which are more geared towards things like putting foundations under fields where it seems possible to get "good enough to run but not good enough to be safe" heuristics. Knowing how MDPs work is useful, but it isn't really necessary to understand our active research.

Not really. Rather, that particular Model Theory textbook was rather brutal, and you only need the first two or so chapters to understand our Tiling Agents research, and it's much easier to pick up that knowledge using an "intro to logic" textbook. The "model theory" section is still quite important, though!

It may be more exciting, but the HoTT book has a bad habit of sending people down the homotopy rabbit hole. People with CS backgrounds will probably find it easier to pick up other type theories. (In fact, Church's "simple type theory" paper may be enough instead of an entire textbook... maybe I'll update the suggestions.) But yeah, HoTT certainly is pretty exciting these days, and the HoTT book is a fine substitute for the one in the guide :-)
devi60

I think AI-completeness is quite a seductive notion. Borrowing the concept of reduction from complexity/computability theory makes it sound technical, but unlike in those fields, I haven't seen anyone actually describe, e.g., how to use an AI with perfect language understanding to produce another one that proves theorems or philosophizes.

Intuitively, it feels like everyone here should in principle be able to sketch the outlines of such a program (at least in the case where the base AI we want to reduce to has perfect language comprehension), probably by so... (read more)

3mvp9
A different (non-technical) way to argue for their reducibility is through analysis of the role of language in human thought. The logic being that language by its very nature extends into all aspects of cognition (little human thought of interest takes place outside its reach), and so one cannot do one without the other. I believe that's the rationale behind the Turing test. It's interesting that you mention machine translation though. I wouldn't equate that with language understanding. Modern translation programs are getting very good, and may in time be "perfect" (indistinguishable from competent native speakers), but they do this through pattern recognition and leveraging a massive corpus of translation data - not through understanding it.
2KatjaGrace
A somewhat limited effort to reduce tasks to one another in this vein: http://www.academia.edu/1419272/AI-Complete_AI-Hard_or_AI-Easy_Classification_of_Problems_in_Artificial
devi30

This sounds like an interesting project. I've studied a fair amount of category theory myself, though mostly from the "oh, pretty!" point of view, and dipped my feet into algebraic geometry because it sounded cool. I think that reading algebraic geometry with my sights set on cryptography would be more rewarding than the general swimming around in its sea that I've done before. So if you want a reading buddy, do tell. A fair warning though: I'm quite time-limited these coming months, so I won't be able to keep a particularly rapid pace.