AALWA: Ask any LessWronger anything
If you want people to ask you stuff, reply to this post with a comment to that effect.
More accurately, ask any participating LessWronger anything that is in the category of questions they indicate they would answer.
If you want to talk about this post you can reply to my comment below that says "Discussion of this post goes here.", or not.
Comments (611)
Discussion of this post goes here.
I think this is a really cool post idea. LW has a well-above-average user base, and sharing knowledge and ideas publicly can be a great boon to the community as a whole.
Yes, this is a really nice open thread that seems to be working well.
Why did you make this post, Will? Wait, I guess you didn't comment here volunteering to answer questions.
Anyway I guess I can answer questions but I'm pretty lazy and not very educated so ask at your own risk.
You're asking me why? I did it 'cause I was bored.
I'll probably jump in if others do, otherwise it's too narcissistic as the creator of the post.
Will have you ever had an encounter with the divine?
Fo sho.
If anyone's interested (ha!), then sure, go ahead, ask me anything. (Of course I reserve the right not to answer if I think it would compromise my real-world identity, etc.)
N.B. I predict at ~75% that this thread will take off (i.e. get more than about 20 comments) iff Eliezer or another public figure decides to participate.
Why are you hiding your real identity? Don't you fear that in a few years programs, available to the general public, will be able to match writing patterns and identify you?
For what it's worth I posted this with my main account and not with a sockpuppet precisely to ensure the exclusion of Eliezer.
I've read the sequences and have a pretty solid grip on what the LW orthodox position is on epistemology and a number of other issues - anyone need some clarification on any points?
Could you summarise the point of/the conclusions of the posts about second order logic and Gödel's theorems in the Epistemology Sequence? I didn't understand them, but I'd like to know where they were heading at least.
I don't quite have the mathematical background and sophistication to grok those posts as well, but I did get their purpose - to hook mathematicians into thinking about the open problems that Eliezer and MIRI have identified as being relevant.
I'm guessing you think free will is a trivial problem, what about consciousness? That still baffles me.
The most apt description I've found is something along the lines of "consciousness is what information-processing feels like from the inside."
It's not just about what a brain does, because a simulated brain would still be conscious, despite not being made of neurons. It's about certain kinds of patterns of thought (not the physical neural action, but thought as in operation performed on data). Human brains have it, insects don't, anything in between is something for actual specialists to discuss. But what it is - the pattern of data processing - isn't all that mysterious.
How do you know?
I find it awfully suspicious that the vast majority of humans talk about experiencing consciousness. It'd be very strange if they were doing so for no reason, so I think that the human brain has some kind of pattern of thought that causes talking about consciousness.
For brevity, I call that-kind-of-thinking-that-causes-people-to-talk-about-consciousness "consciousness".
A definition of the form "it has it if it talks about it" is problematic. You can make a very simple machine that talks about experiencing consciousness.
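As a toy illustration of the objection above, here is such a "very simple machine", sketched in Python (the function name and its output string are made up for illustration):

```python
# A machine that emits first-person reports of consciousness, yet
# almost no one would call it conscious. This is the problem with
# defining consciousness as "whatever causes talk about consciousness".
def simple_machine():
    return "I am experiencing consciousness right now."

print(simple_machine())
```

The report alone clearly isn't sufficient evidence; the earlier definition would need to say something about the kind of internal processing that produces the report.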
Okay, but why does information processing feel like anything at all? There are cognitive processes that are information processing, but you are not conscious of them.
Sure, you can ask me anything.
IIRC you are interested in educational games, any new thoughts in that area?
Depends on what you mean by new: I elaborated on some of my core ideas about the field in the blog posts Why edugames don't have to suck, Videogames will revolutionize school (not necessarily the way you think), and also touched upon their role in society in Doing Good in the Addiction Economy. My thoughts have gotten somewhat more precise, but off-hand I can't think of any major recent insights that I wouldn't have mentioned in those posts.
On the topic of the educational game that I'm doing for my Master's Thesis, I'm making slow but sure progress.
Yes, I had read those posts before which is why I knew you were involved in the field. Good luck with your thesis - I think games have huge potential in education, but it will be difficult because educational games are aiming at a smaller target than normal ones.
I have an idea for a video game that can teach microeconomics. It would create a persistent low-graphics world similar to what's in the game Travian and would require no artificial intelligence. Unfortunately, I can't program beyond the level of what they teach in codecademy. Do you have suggestions for people I could contact to get financial support for my game? I'm the author of a microeconomics textbook and so I think I have a credible background for this project.
What is the intended audience for this game? And why do you think people will play it?
Students taking introductory or intermediate microeconomics. Instructors would require their students to play.
Ah, so this is purely non-commercial, a course teaching aid, basically.
Can't you rope some grad students into doing this?
I would love to make money off of it, and have a revenue model but I would also be willing to do it for free.
My school doesn't have econ grad students. Also, it wouldn't be a good career move for a grad student who wanted to become a professor to devote lots of time to this.
So the target market is economics departments at other colleges/universities? You're talking essentially about a piece of education software sold to institutions, not to end users/players.
In this case, I think, you'll have to make a business case for the proposition. I am not sure enough people will find this idea fun enough to contribute their time for free.
Another point: do you really have to develop a new game from scratch? Doing a mod of an existing game or engine is likely to be vastly simpler and cheaper.
Hmm. I haven't really looked into any actual funding agencies or the "getting money for this" side at this point, so I don't know much about that, but I can think of some researchers who might either have an interest in collaborating, or who could know more direct sources of funding. Two groups that come to mind who might be worth contacting in this regard are GAPS and Institute of Play. I'll let you know if I think of any others. (If you do contact them, I'd be curious to hear about the response.)
In the unlikely event that anyone is interested, sure, ask me anything.
Edit: Ethics are a particular interest of mine.
Any topics of interest? Same goes for other 'whatever's
Ethics, I suppose. Most of my other interests are either probably too mindkilling for LW or are written about in the Sequences already, more clearly than I could write about them.
What are your Sequence-superseded interests? Would you please name three points from anywhere within them where your opinion differs (even if minorly) from EY (or the author of the most relevant sequence if different)?
Would you rather fight one horse sized duck, or a hundred duck sized horses?
Is this a fist-fight or can blacktrance use weapons?
Depends on the situation. Do I have to kill whatever I'm fighting, or do I just have to defend myself? If it's the former, the horse-sized duck, because duck-sized horses would be too good at running away and hiding. If it's the latter, then the duck-horses, because they'd be easier to scatter.
I don't think I'm known around here, but sure why not. Ask me anything.
I'm a PhD student in artificial intelligence, and co-creator of the SPARC summer program. AMA.
What do you feel are the most pressing unsolved problems in AGI?
Do you believe AGI can "FOOM" (you may have to qualify what you interpret FOOM as)?
How viable is the scenario of someone creating an AGI in their basement, thereby changing the course of history in unpredictable ways?
In AGI? If you mean "what problems in AI do we need to solve before we can get to the human level", then I would say:
To some extent this reflects my own biases, and I don't mean to say "if we solve these problems then we'll basically have AI", but I do think it will either get us much closer or else expose new challenges that are not currently apparent.
I think it is possible that a human-level AI would very quickly acquire a lot of resources / power. I am more skeptical that an AI would become qualitatively more intelligent than a human, but even if it was no more intelligent than a human but had the ability to easily copy and transmit itself, that would already make it powerful enough to be a serious threat (note that it is also quite possible that it would have many more cycles of computation per second than a biological brain).
In general I think this is one of many possible scenarios, e.g. it's also possible that sub-human AI would already have control of much of the world's resources and we would have built systems in place to deal with this fact. So I think it can be useful to imagine such a scenario but I wouldn't stake my decisions on the assumption that something like it will occur. I think this report does a decent job of elucidating the role of such narratives (not necessarily AI-related) in making projections about the future.
Not viable.
How did you come up with the course content for SPARC?
We brainstormed things that we know now that we wished we had known in high school. During the first year, we just made courses out of those (also borrowing from CFAR workshops) and rolled with that, because we didn't really know what we were doing and just wanted to get something off the ground.
Over time we've asked ourselves what the common thread is in our various courses, in an attempt to develop a more coherent curriculum. Three major themes are statistics, programming, and life skills. The thing these have in common is that they are some of the key skills that extremely sharp quantitative minds need to apply their skills to a qualitative world. Of course, it will always be the case that most of the value of SPARC comes from informal discussions rather than formal lectures, and I think one of the best things about SPARC is the amount of time that we don't spend teaching.
Do you have a handle on the size of the field? E.g. how many people, counting from PhD students and upwards, are working on AGI in the entire world? More like 100 or more like 10,000 or what's your estimate?
I don't personally work on AGI and I don't think the majority of "AGI progress" comes from people who label themselves as working on AGI. I think much of the progress comes from improved tools due to research and usage in machine learning and statistics. There are also of course people in these fields who are more concerned with pushing in the direction of human-level capabilities. And progress everywhere is so inter-woven that I don't even know if thinking in terms of "number of AI researchers" is the right framing. That said, I'll try to answer your question.
I'm worried that I may just be anchoring off of your two numbers, but I think 10^3 is a decent estimate. There are upwards of a thousand people at NIPS and ICML (two of the main machine learning conferences), only a fraction of those people are necessarily interested in the "human-level" AI vision, but also there are many people who are in the field who don't go to these conferences in any given year. Also many people in natural language processing and computer vision may be interested in these problems, and I recently found out that the program analysis community cares about at least some questions that 40 years ago would have been classified under AI. So the number is hard to estimate but 10^3 might be a rough order of magnitude. I expect to find more communities in the future that I either wasn't aware of or didn't think of as being AI-relevant, and who turn out to be working on problems that are important to me.
Could you talk about your graduate work in AI? Also, out of curiosity, did you weight possible contribution towards a positive singularity heavily in choosing your subfield/projects?
(I am trying to figure out whether it would be productive for me to become familiar with AI in mainstream academia and/or apply for PhD programs eventually.)
I work on computationally bounded statistical inference. Most theoretical paradigms don't have a clean way of handling computational constraints, and I think it's important to address this since the computational complexity of exact statistical inference scales extremely rapidly with model complexity. I have also recently started working on applications in program analysis, both because I think it provides a good source of computationally challenging problems and because it seems like a domain that will force us into using models with high complexity.
Singularity considerations were a factor when choosing to work on AI, although I went into the field because AI seems like a robustly game-changing technology across a wide variety of scenarios, whether or not a singularity occurs. I certainly think that software safety is an important issue more broadly, and this partially influences my choice of problems, although I am more guided by the problems that seem technically important (and indeed, I think this is mostly the right strategy even if you care about safety to a fair degree).
Learning more about mainstream AI has greatly shaped my beliefs regarding AGI, so it's something that I would certainly recommend. Going to grad school shaped my beliefs even further, even though I had already read many AI papers prior to arriving at Stanford.
Is there any uptake of MIRI ideas in the AI community? Of HPMOR?
What does that question mean?
Sorry, typo now fixed. See my response to jsteinhardt below.
Like Mark, I'm not sure I was able to parse your question, can you please clarify?
Right, there was a typo. I've fixed it now. I'm just wondering if MIRI-like ideas are spreading among AI researchers. We see that Norvig takes these ideas seriously.
And separately, I wonder if HPMOR is a fad in elite AI circles. I have heard that it's popular in top physics departments.
I wouldn't presume to know what the field as a whole thinks, as I think views vary a lot from place to place and I've only spent serious time at a few universities. However, I can speculate based on the data I do have.
I think a sizable number (25%?) of AI graduate students I know are aware of LessWrong's existence. Also a sizable (although probably smaller) number have read at least a few chapters of HPMOR; for the latter I'm mostly going off of demographics, as I don't know many who have told me they read HPMOR.
There is very little actual discussion of MIRI or LessWrong. From what I would gather most people silently disagree with MIRI, a few people probably silently agree. I would guess almost no one knows what MIRI is, although more would have heard of the Singularity Institute (but might confuse it with Singularity University). People do occasionally wonder whether we're going to end up killing everyone, although not for too long.
To address your comment in the grandchild, I certainly don't speak for Norvig but I would guess that "Norvig takes these [MIRI] ideas seriously" is probably false. He does talk at the Singularity Summit, but the tone when I attended his talk sounded more like "Hey, you guys just said a bunch of stuff; based on what people in AI actually do, here are the parts that seem true and here are the parts that seem false." It's also important to note that the notion of the singularity is much more widespread as a concept than MIRI in particular. "Norvig takes the singularity seriously" seems much more likely to be true to me, though again, I'm far from being in a position to make informed statements about his views.
Thanks. I was basing my comments about Norvig on what he says in the intro to his AI textbook, which does address UFAI risk.
Ask me anything. I'm the author of Singularity Rising.
What, if anything, do you think a lesswrong regular who's read the sequences and all/most of MIRI's non-technical publications will get out of your book?
Along with the views of EY (which such readers would already know) I present the singularity views of Robin Hanson and Ray Kurzweil, and discuss the intelligence enhancing potential of brain training, smart drugs, and eugenics. My thesis is that there are so many possible paths to super-human intelligence, and such incredible military and economic benefits to developing super-human intelligence, that unless we destroy our high-tech civilization we will almost certainly develop it.
How much time did it take you to write the singularity book? How much money has it brought you?
Same question about your microeconomics textbook. Also, what motivated you to write it given that there must be about 2^512 existing ones on the market?
Hard to say about the time because I worked on both books while also doing other projects. I suspect I could have done the singularity book in about 1.5 years of full-time effort. I don't have a good estimate for the textbook. Alas, I have lost money on the singularity book because the advance wasn't all that big and I had personal expenses such as hiring a research assistant and paying a publicist. The textbook had a decent advance, but I still probably earned roughly minimum wage for it. Surprisingly, I've done fairly well with my first book, Game Theory at Work, in part because of translation rights; with it I've probably earned several times the minimum wage. Of course, writing is part of my job as a professor, and I'm not counting my salary in these figures.
I wanted to write a free market microeconomics textbook, and there are very few of these. I was recruited to write the textbook by the people who published Game Theory at Work. Had the textbook done very well, I could have made a huge amount of money (roughly equal to my salary as a professor) indefinitely. Alas, this didn't happen but the odds of it happening were well under 50%. Since teaching microeconomics is a big part of my job as a college professor, there was a large overlap between writing the textbook and becoming a better teacher. My textbook publisher sent all of my chapters to other teachers of microeconomics to get their feedback, and so I basically got a vast amount of feedback from experts on how I teach microeconomics.
Did you see any shifts in opinion (even in a small audience) following on your book?
Not really. Someone (I forgot who) wrote that I helped them see the race to create AI as a potential existential risk. I promoted the book on numerous radio shows and I hope I convinced at least a few people to do further research and perhaps donate money to MIRI, but this is just a hope.
Why do you think that it is so hard to get through to people?
Not only you, but others involved in this, and myself, have all found that intelligent people will listen and even understand what you are telling them -- I probe for inferential gaps, and if they exist they are not obvious.
Yet almost no one gets on board with the MIRI/FHI program.
Why?
I have thought a lot about this. Possible reasons: most humans don't care about the far future or people who are not yet born; most things that seem absurd really are absurd and are not worth investigating, and the singularity certainly seems superficially absurd; the vast majority is right and you and I are wrong to worry about a singularity; it's impossible for people to imagine an intelligent AI that doesn't have human-like emotions; the Fermi paradox implies that civilizations such as ours are not going to be able to rationally think about the far future; and an ultra-AI would be a god and so is disallowed by most people's religious beliefs.
Your question is related to why so few sign up for cryonics.
I agree with you that a lot of people think that way, but I have spoken to quite a few smart people who understand all the points -- I probe to figure out if there are any major inferential gaps -- and they still don't get on the bandwagon.
Another point is simply that we cannot all devote time to all important things; they simply choose not to prioritize this.
I don't know about anyone else, but I find it hard to believe that provable Friendliness is possible.
On the other hand, I think high-probability Friendliness might be possible.
Why did you decide to run for Massachusetts State Senate in 2004? Did you ever think you had a chance of winning?
No. I ran as a Republican in one of the most Democratic districts in Massachusetts, my opponent was the second most powerful person in the Massachusetts State Senate, and even Republicans in my district had a high opinion of him.
Why did you run?
I wanted to get more involved in local Republican politics and no one was running in the district and it was suggested that I run. It turned out to be a good decision as I had a lot of fun debating my opponent and going to political events. Since winning wasn't an option, it was even mostly stress free.
I have a political question/proposition I have been pondering, and you, an intelligent semi-involved Massachusetts Republican, are precisely the kind of person who could answer it usefully. May I ask it in a private message?
Haven't read your book so not sure if you have already answered this.
What is your assessment of MIRI's current opinion that increasing the global economic growth rate is a source of existential risk?
How much risk is increased for what increase in growth?
Are there safe paths? (Maybe catch-up growth in India and China is safe??)
Greater economic growth means more money for AI research from companies and governments, and if you think that AI will probably go wrong, then this is a source of trouble. But there are benefits as well, including increased charitable contributions for organizations that reduce existential risk and better educational systems in India and China which might produce people who end up helping MIRI. Overall, I'm not sure how this nets out.
Catch-up growth is not necessarily safe because it will increase the demand for products that use AI and so increase the amount of resources companies such as Google devote to AI.
The only safe path is someone developing a mathematically sound theory of friendly AI, but this will be easier if we get (probably via China) intelligence enhancement with eugenics.
I write about causality sometimes.
How significant/relevant is the mathematical work on causality to philosophical work/discussion? If someone was talking about causality in a philosophical setting and had never heard of the relevant math, how badly would/should that reflect on them? Does it make a difference if they've heard of it, but didn't bother to learn the math?
I am not up on my philosophical literature (trying to change this), but I think most analytic philosophers have heard of Pearl et al. by now. Not every analytic philosopher is as mathematically sophisticated as e.g. people at the CMU department. But I think that's ok!
I don't think it's a wise social move for LW to beat on philosophers.
Which academic disciplines care about causality? (I'm guessing statistics, CS, philosophy... anything else?)
Is there anything like a mainstream agreement on how to model/establish causality? E.g. does more or less everyone agree that Pearl's book, which I haven't read, is the right approach? If not, is it possible to list the main competing approaches? Does there exist a reasonably neutral high-level summary of the field?
On some level any empirical science cares, because the empirical sciences all care about cause-effect relationships. In practice, the 'penetration rate' is path-dependent (that is, depends on the history of the field, personalities involved, etc.)
To add to your list, there are people in public health (epidemiology, biostatistics), social science, psychology, political science, economics/econometrics, computational bio/omics that care quite a bit. Very few philosophers (excepting the CMU gang, and a few other places) think about causal inference at the level of detail a statistician would. CS/ML do not care very much (even though Pearl is CS).
I think there is as much agreement as there can reasonably be for a concept such as causality (that is, a philosophically laden concept that's fun to argue about). People model it in lots of ways; I will try to give a rough taxonomy, and will tell you where Pearl lies.
Interventionist vs non-interventionist
Most modern causal inference folks are interventionists (including Pearl, Rubin, Robins, etc.). The 'Nicene creed' for interventionists is: (a) an intervention (forced assignment) is key for representing cause/effect, (b) interventions and conditioning are not the same thing, (c) you express interventions in terms of ordinary probabilities using the g-formula/truncated factorization/manipulated distribution (different names for the same thing). The concept of an intervention is old (it goes back to Neyman in the 1920s, I think, possibly even earlier).
To me, non-interventionists fall into three categories: 'naive,' 'abstract', and 'indifferent.' Naive non-interventionists are not using interventions because they haven't thought about things hard enough, and will thus get things wrong. Some EDT folks are in this category. People who ask 'but why can't we just use conditional probabilities' are often in this set. Abstract non-interventionists are not using interventions because they have in mind some formalism that has interventions as a special case, and they have no particular need for the special case. I think David Lewis was in this camp. Joe Halpern might be in this set, I will ask him sometime. Indifferent non-interventionists operate in a field where there is little difference between conditioning and interventions (due to lack of interesting confounding), so there is no need to model interventions explicitly. Reinforcement learning people, and people who only work with RCT data are in this set.
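For readers unfamiliar with the g-formula/truncated factorization mentioned above, here is a minimal sketch in Python on a made-up three-variable model (all the numbers are invented for illustration). It also shows point (b): intervening and conditioning give different answers when there is confounding.

```python
# Toy model: Z -> X -> Y, with Z -> Y (Z confounds X and Y).
# The joint factorizes as P(z, x, y) = P(z) P(x|z) P(y|x, z).
# The truncated factorization drops the factor of the intervened
# variable:  P(y | do(X=x)) = sum_z P(z) P(y | x, z).

p_z = {0: 0.5, 1: 0.5}                      # P(Z=z)
p_x_given_z = {0: {0: 0.9, 1: 0.1},         # p_x_given_z[z][x] = P(X=x | Z=z)
               1: {0: 0.1, 1: 0.9}}
p_y1_given_xz = {(0, 0): 0.2, (0, 1): 0.6,  # P(Y=1 | X=x, Z=z)
                 (1, 0): 0.4, (1, 1): 0.8}

def p_y1_do_x(x):
    """P(Y=1 | do(X=x)) via the g-formula / truncated factorization."""
    return sum(p_z[z] * p_y1_given_xz[(x, z)] for z in p_z)

def p_y1_cond_x(x):
    """Ordinary conditional P(Y=1 | X=x), for contrast."""
    p_x = sum(p_z[z] * p_x_given_z[z][x] for z in p_z)
    return sum(p_z[z] * p_x_given_z[z][x] * p_y1_given_xz[(x, z)]
               for z in p_z) / p_x

print(p_y1_do_x(1))    # ~0.6  (causal effect; confounding by Z removed)
print(p_y1_cond_x(1))  # ~0.76 (observational; inflated because Z pushes X and Y together)
```

The gap between the two numbers is exactly why "but why can't we just use conditional probabilities" gets things wrong in the presence of confounding.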
Counterfactualists vs non-counterfactualists
Most modern causal inference folks are counterfactualist (including Pearl, Rubin, Robins, etc.). To a counterfactualist it is important to think about a hypothetical outcome under a hypothetical intervention. Obviously all counterfactualists are interventionist. A noted non-counterfactualist interventionist is Phil Dawid. Counterfactuals are also due to Neyman, but were revived and extended by Rubin in the 70s.
Graphical vs non-graphical
Whether you like using graphs or not. Modern causal inference is split on this point. Folks in the Rubin camp do not like graphs (for reasons that are not entirely clear -- what I heard is they find them distracting from important statistical modeling issues (??)). Folks in the Pearl/SGS/Robins/Dawid/etc. camp like graphs. You don't have to have a particular commitment to any earlier point to have an opinion on graphs (indeed lots of graphical models are not about causality at all). In the context of causality, graphs were first used by Sewall Wright for pedigree analysis (1920s). Lauritzen, Pearl, etc. gave a modern synthesis of graphical models. Spirtes/Glymour/Scheines and Pearl revived a causal interpretation of graphs in the 90s.
"Popperians" vs "non-Popperians"
Whether you restrict yourself to testable assumptions. Pearl is non-Popperian, his models make assumptions that can only be tested via a time machine or an Everett branch jumping algorithm. Rubin is also non-Popperian because of "principal stratification." People that do "mediation analysis" are generally non-Popperian. Dawid, Robins, and Richardson are Popperians -- they try to stick to testable assumptions only. I think even for Popperians, some of their assumptions must be untestable (but I think this is probably necessary for statistical inference in general). I think Dawid might claim all counterfactualists are non-Popperian in some sense.
I am "a graphical non-Popperian counterfactualist" (and thus interventionist).
We are working on it.
Are you aware of any attempts to assign a causality(-like?) structure to mathematics?
There are certainly areas of mathematics where it seems like there is an underlying causality structure (frequently orthogonal or even inverse to the proof structure), but the probability based definition of causality fails when all the probabilities are 0 or 1.
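To make the degeneracy explicit: one common probability-raising account of causation (there are variants) says roughly

```latex
% A simplified probability-raising account of causation:
C \text{ causes } E \;\iff\; P(E \mid C) > P(E \mid \neg C).
% For mathematical statements every truth has probability 1 and every
% falsehood probability 0, so for true C and E we get P(E \mid C) = 1,
% while P(E \mid \neg C) conditions on a probability-zero event.
```

Since the inequality can never hold (and one side may be undefined), this style of definition has nothing to say about which theorems "cause" which.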
Can you give a simple example of/pointer to what you mean?
Well, in analytic number theory, for example, there are many heuristic arguments that have a causality-like flavor; however, the proofs of the statements in question are frequently unrelated to the heuristics.
Also, this is a discussion about the causal relationship between a theorem and its proof.
I don't know much about analytic number theory, could you be more specific? I didn't follow the discussion you linked very well, because they say things like "Pearlian causality is not counterfactual", or think that there is any relationship between implication and causation. Neither is true.
I believe that the things I do at any given time are reasonable for me to do, AMA.
I'm a Research Associate at MIRI. I became a supporter in late 2005, then contributed to research and publication in various ways. Please, AMA.
Opinions I express here and elsewhere are mine alone, not MIRI's.
To be clear, as an Associate I am an outsider to the MIRI team, one who collaborates with them in various ways.
What probability would you assign to this statement: "UFAI will be relatively easy to create within the next 100 years. FAI is so difficult that it will be nearly impossible to create within the next 200 years."
I think that the estimates cannot be made independently. FAI and UFAI would each pre-empt the other. So I'll rephrase a little.
I estimate the chances that some AGI (in the sense of "roughly human-level AI") will be built within the next 100 years as 85%, which is shorthand for "very high, but I know that probability estimates near 100% are often overconfident; and something unexpected can come up."
And "100 years" here is shorthand for "as far off as we can make reasonable estimates/guesses about the future of humanity"; perhaps "50 years" should be used instead.
Conditional on some AGI being built, I estimate the chances that it will be unfriendly as 80%, which is shorthand for "by default it will be unfriendly, but people are working on avoiding that and they have some small chance of succeeding; or there might be some other unexpected reason that it will turn out friendly."
Thank you. I didn't phrase my question very well but what I was trying to get at was whether making a friendly AGI might be, by some measurement, orders of magnitude more difficult than making a non-friendly one.
Yes, it is orders of magnitude more difficult. If we took a hypothetical FAI-capable team, how much less time would it take them to make a UFAI than a FAI, assuming similar levels of effort, and starting at today's knowledge levels?
One-tenth the time seems like a good estimate.
My question is similar to the one that Apprentice posed below. Here are my probability estimates of unfriendly and friendly AI, what are yours? And more importantly, where do you draw the line, what probability estimate would be low enough for you to drop the AI business from your consideration?
Even a fairly low probability estimate would justify effort on an existential risk.
And I have to admit, a secondary, personal, reason for being involved is that the topic is fascinating and there are smart people here, though that of course does not shift the estimates of risk and of the possibilities of mitigating it.
When do you estimate that MIRI will start writing the code for a friendly AI?
Median estimate for when they'll start working on a serious code project (i.e., not just toy code to illustrate theorems) is 2017.
This will not necessarily be development of friendly AI -- maybe a component of friendly AI, maybe something else. (I have no strong estimates for what that other thing would be, but just as an example--a simulated-world sandbox).
Everything I say above (and elsewhere) is my opinion, not MIRI's. Median estimate for when they'll start working on friendly AI, if they get started with that before the Singularity, and if their direction doesn't shift away from their apparent current long-term plans to do so: 2025.
What are the error bars around these estimates?
The first estimate: 50% probability between 2015 and 2020.
The second estimate: 50% probability between 2020 and 2035. (again, taking into account all the conditioning factors).
Um.
The distribution is asymmetric for obvious reasons. The probability for 2014 is pretty close to zero. This means that there is a 50% probability that a serious code project will start after 2020.
This is inconsistent with 2017 being a median estimate.
We're so screwed, aren't we?
Your published dissertation sounds fascinating, but I swore off paper books. Can you share it in digital form?
Sure, I'll send it to you. If anyone else wants it, please contact me. I always knew that Semitic Noun Patterns would be a best seller :-)
What do you think is the likelihood of AI boxing being successful, and why? (Interested in reasons, not numbers.)
I don't think I have anything to say that hasn't been said better by others in MIRI and FHI, but I think that AI boxing is impossible because (1) it can convince any gatekeepers to let it out, (2) any AI is "embodied" and not separate from the outside world, if only in that its circuits pass electrons, and (3) I doubt you could convince all AGI researchers to keep their projects isolated.
Still, I think that AI boxing could be a good stopgap measure, one of a number of techniques that are ultimately ineffectual, but could still be used to slightly hold back the danger.
I've talked to a former grad student (fiddlemath, AKA Matt Elder) who worked on formal verification, and he said current methods are not anywhere near up to the task of formally verifying an FAI. Does MIRI have a formal verification research program? Do they have any plans to build programming processes like this or this?
Ask me anything. Like Vulture, I reserve the right to not answer.
Is your button business really functioning, do you get a nontrivial number of orders? What do your buttons look like and why isn't there a single picture of one on your website?
It's still functioning to some extent-- I'll be at Arisia next weekend. As far as I can tell, I'm neglecting the website because of depression and inertia.
Images of buttons
I'm an unemployed legally blind mostly white American who may at one point have been good at math and programming, who is just smart enough to get loads of spam from MIT, but not smart enough to avoid putting my foot in my mouth about once a month on LessWrong. I've been talking about blindness-related issues a lot over the past year, mostly because I suddenly realized that they were relevant, but my aim is to solve these problems as quickly as possible so I can get back to getting better at things that actually matter. On the off chance that you have questions, feel free to AMA.
How blind are you, in layman terms of what you can/can't see? What's your prognosis?
I'm not-quite completely blind; what little vision I have tends to fluctuate between effectively nonexistent and good enough to notice vague details maybe once or twice a year. I could see better up until I was 14, but my vision was still too poor to get out of using braille and a cane (given thick glasses and enough time, I could possibly have read size 20 font; even with the much larger font used in movie subtitles, I had to pause the video and put my face against the screen to read them).
I don't know my official acuity/diagnoses (It's been a few years since I saw an eye doctor), but I appear to have started out with retinal detachment and scarring, and later developed uveitis. The latter seems to be the primary cause for the dramatic decline starting from age 14.
Are these problems likely to be correctable/improvable with medicine, but you have no money/insurance to get medical help? Or are they of a kind that basically can't be helped, and that's why you haven't been to a doctor in years? Or is it something else?
Do you use a reader program to browse the web and this site? Do you touch-type or dictate your comments?
(I realize that my questions are callous; please feel free to ignore if they're too invasive)
The retinal issues are unlikely to be fixable in the immediate future (though the latest developments on that front seem potentially promising). There may be a treatment for the more annoying issue, but I don't know if it's too late/what I should do to learn more, and so I'm waiting until life in general is more favorable to dig into it further. (Which I expect means I'll be putting it off until 2015, since I expect to be fairly occupied during most of 2014.)
For using the internet/computers in general, I use NonVisual Desktop Access (NVDA), a free screen reader which only recently attained comparable status to JAWS for Windows, which I'd been using prior to 2011. These work well with plaintext, and have trouble with certain types of controls/labels and images and such (I had to Skype someone a screenshot to get past the CAPTCHA to register here; I was using a trial of a CAPTCHA-solving add-on at the time, but it was unable to locate the CAPTCHA on LessWrong). Since NVDA is open source, users frequently develop useful add-ons and plugins, such as a CPU usage monitor and the ability to summon a Google translation of copied text with a single keystroke. (It supposedly includes an optical character recognition feature, but I've never figured out how to use it.)
I touch-type. I'm not much of a fan of dictation, though I'm not sure why.
Why is that? No healthcare policy? It seems that you have good reason to frequent an eye-doctor.
Most of my medical everything is handled by my parents, who are unlikely to do anything unless it is brought to their attention (though sometimes they do ask to make sure nothing's quietly going horribly wrong). My vision was awful enough when last I went, the doctor was only aware of a full-on bionic eye as a possible method for improvement, and what little vision I had left was vulnerable enough to damage/severe discomfort from the sorts of things needed to examine my eyes (holding them open and shining a light in, basically) that it has mostly stopped being worth it.
I did discover a possible treatment for my specific condition recently. I am unsure as to if it would be of much value with my vision as it currently is, but it's something I aim to look into further when I've sorted out enough of this basic life stuff.
Why do you say "may have at one point been good at math and programming"? Aren't you still good at that? Are opportunities available for people like yourself -- blind, but with those aptitudes -- in today's world, where so much is done in front of a computer screen and adaptive technologies exist? Or do you think that in a competitive world, blindness puts you hopelessly behind sighted people?
Do you think that your level of ambition and drive are lessened by your disability, increased, or does it make no difference?
Does the CfAR-style philosophy of instrumental rationality help you overcome your disability?
Standard economics question: have you considered accepting lower pay?
Yes.
You can ask me things if you like. At Reddit, some of the most successful AMAs are when people are asked about their occupation. I have a PhD in linguistics/philology and currently work in academia. We could talk about academic culture in the humanities if someone is interested in that.
Can you talk about your specific field in linguistics/philology? What it is, what are the main challenges?
Do you have a stake/an opinion in the debates about the Chomskyan strain in syntax/linguistics in general?
I've mucked about here and there including in language classification (did those two extinct tribes speak related languages?), stemmatics (what is the relationship between all those manuscripts containing the same text?), non-traditional authorship attribution (who wrote this crap anyway?) and phonology (how and why do the sounds of a word "change" when it is inflected?). To preserve some anonymity (though I am not famous) I'd rather not get too specific.
There are lots of little problems I'm interested in for their own sake, but perhaps the meta-problems are of more interest here. Those would include getting people to accept that we can actually solve problems and that we should try our best to do so. Many scholars seem to have this fatalistic view of the humanities as doomed to walk in circles and never really settle anything. And for good reason - if someone manages to establish "p", then all the nice speculation based on assuming "not p" is worthless. But many would prefer to be as free as possible to speculate about as much as possible.
Yes. I think the Chomskyan approach is based on a fundamentally mistaken view of cognition, akin to "good old fashioned artificial intelligence". I hope to write a top-level post on this at some point. But I'll say this for Chomsky: He's not a walk-around-in-circles obscurantist. He's a resolutely-march-ahead kind of guy. A lot of the marching was in the wrong direction, but still, I respect that.
That's quite a broad field to plow! I'll keep asking questions, feel free to ignore those that are too specific/boring.
I've always wanted to know more about how authorship attribution is done; is this, found with a quick search, a reasonable survey of current state of the art, or perhaps you'd recommend something else to read?
Are your fields, and humanities in general, trying to move towards open publishing of academic papers, the way STEM fields have been trying to? As someone w/o a university affiliation, I'm intensely frustrated every time I follow an interesting citation to a JSTOR/Muse page.
Do you plan to stay in academia or leave, and if the latter, for what kind of job?
I think you should write that post about the Chomskyan approach.
The Stamatatos survey you linked to will do fine. The basic story is "back in the day this stuff was really hard but some people tried anyway, then in 1964 Mosteller and Wallace published a landmark paper showing that you really could do impressive stuff, then along came computers and now we have a boatload of different algorithms, most of which work just great". The funny thing about stylometry is that it is hard to get wrong. Count up anything you like (frequent words, infrequent words, character n-grams, whatever) and use any distance measurement you like and odds are you'll get usable results. If you want to play around with this for yourself you can install stylo and turn it loose on a corpus of your choice. Gwern's little experiment is also a good read.
My involvement with stylometry has not been to tweak the algorithms (they work just fine) but to apply them in some particular cases and to try to convince my fellow scholars that technological wizardry really can tell them things worth knowing.
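The "count up anything you like, use any distance measurement you like" recipe above can be sketched in a few lines of Python. This is my own toy illustration (the function names, character-trigram choice, and sample texts are all hypothetical, not the stylo package's API): build relative-frequency profiles of character n-grams for each candidate author, then attribute a disputed text to the author whose profile is closest by cosine distance.

```python
from collections import Counter
import math

def ngram_profile(text, n=3):
    """Relative frequencies of character n-grams in a text."""
    text = text.lower()
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    total = sum(grams.values())
    return {g: c / total for g, c in grams.items()}

def cosine_distance(p, q):
    """1 - cosine similarity between two sparse frequency profiles."""
    dot = sum(p[g] * q[g] for g in set(p) & set(q))
    norm = lambda v: math.sqrt(sum(x * x for x in v.values()))
    return 1 - dot / (norm(p) * norm(q))

def attribute(disputed, candidates):
    """Return the candidate author whose known text is closest to the disputed one."""
    d = ngram_profile(disputed)
    return min(candidates, key=lambda a: cosine_distance(d, ngram_profile(candidates[a])))
```

On real corpora you would use much longer texts and likely frequent-word counts rather than raw trigrams, but even this crude sketch illustrates why stylometry is hard to get wrong: almost any feature/distance pairing separates authors with distinctive habits.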
Yes. Essentially every scholar I know is in favor of this. As far as I can see, it will happen and is happening.
I worked as an engineer for a few years but found I wasn't that into it and really missed school. So I went back and I'd like to stay.
I would be extremely interested in your post on Chomsky. I almost but not quite majored in linguistics in America, which meant that I got the basic Chomskyan introduction but never got to the arguments against it. I am vaguely familiar with the probabilistic-learning models (enough to get why Chomsky's proof that they can't work fails), but not enough to get what predictions they make etc.
Here I am.
Is it difficult being too smart and concerned about the right things where you live/lived? If yes, how you deal/dealt with it?
Well, it is sometimes difficult to be me, but I'm not sure how much of that is caused by being smart, how much by lack of some skills, and how much is simply the standard difficulty of human life. :D
Seems to me that most people around me don't care about truth or rationality. Usually they just don't comment on things outside of their kitchens, unless they are parroting some opinion from a newspaper or TV. That's actually the less annoying part; I am not disappointed because I didn't expect more from them. More annoying are people who try to appear smart and do so basically by optimizing for signalling: they repeat every conspiracy theory, share on Facebook every "amazing" story without bothering to google for hoaxes or just use some basic common sense. When I am at Mensa and listen to people discussing some latest conspiracy theory, I feel like I might strangle them. Especially when they start throwing around some fully general arguments, such as: You can't actually know anything. They use their intelligence to defeat themselves. Also, I hate religion. That's a poison of the mind; an emotional electric fence in a mind that otherwise might have a chance to become sane. -- But I suspect all countries are like this, in general. And I am lucky to live in one where people won't try to hurt me just because I say something blasphemous. Still, as is obvious from this paragraph, I feel greatly frustrated about the sanity waterline here.
Okay, specifically for Slovakia: This country used to be mostly Catholic, then it was Communist for a few decades, and now it's going back to Catholicism again. Under communism, the Catholics were pretty successful in recruiting many contrarians to their ranks; they pretty much told them that the search for truth is the search for God, and they associated atheism with communism (which wasn't difficult at all, since the Communists used it as an applause light). I was frustrated by seeing people around me look for the truth in the supernatural, and dismissing reality almost as propaganda. Then there was a higher level of contrarians who dismissed the local religion as well, and instead embraced Buddhism or whatever. Believing in "mere reality" does not work as a signal for intelligence here.
I actually don't have a good way of dealing with it. For a time I was alone. For a time I was friendly with religious people, politely participating in their rituals, believing none of it, but enjoying the company of smart contrarians. Once or twice I tried to find some reason in Mensa, and was always horribly disappointed.
As a child, I was a member of the mathematical club; elementary-school students who loved math and did the mathematical olympiad. That was the best part of my life; smart activities, and no bullshit. But as we grew older, the club dissolved. -- Skip almost two frustrating decades and I found LessWrong. And I was like: "Smart and sane people again!" and "Oh shit, why do they have to be on the other side of the planet?" And since then I am trying to build a local rationalist movement, progressing very very slowly.
One thing that keeps me sane is my current girlfriend, who also reads LessWrong, and attended a CFAR minicamp with me. But she is not as enthusiastic about it as I am; and she seems to prefer good relationships with other people to being right. Maybe I am just a horrible person unable to deal with people, but the thing is I am unable to unsee the bullshit; when someone speaks bullshit, it's like a painful shrieking sound in my ears, I just can't ignore it; I can keep quiet but it still feels unpleasant.
I suspect most rational people around me cope by focusing their energy on their favorite project, and ignoring the insanity of the rest of the world. (But I may be wrong at modelling other people.) They probably can be rational in their work, and social in the rest of their lives. Maybe they are happy like that. Maybe they just don't know they could expect more (if I didn't have the unique experience of the mathematical club and of LW, probably neither would I). So this year I will try to make a list of smart and sane people around me, get in contact with each of them, invite them to a local LW meetup, and give them a copy of my translation of the Sequences. -- I am not sure how hard I should push LW, and whether having a club of smart and sane people might not be enough by itself. For me, LW is simply one level more meta: before LW I approximately knew what was and what wasn't rational, but I didn't have any arguments to win a debate. It was a matter of feeling: this seems like a correct way to approach truth, and this feels like a way to madness. I just had the general idea that reality is out there, and that the proper way to grasp it is to adjust my map to the territory, not the other way round. (Because that's what worked for me in mathematics.) -- Maybe my role here is to bring the local smart people together. But maybe I am just projecting my desires onto them, and they are actually quite happy as they are now. This will be resolved experimentally.
I took a look at Mensa sometime in the 80s in the US, mostly through their publications. I was very underwhelmed -- they had a very bad habit of coming up with a set of plausible-sounding definitions and basing an argument on them.
I went to an event, and I could get at least as good conversation at a science fiction convention.
On the other hand, one of my friends, an intelligent person, was very fond of DC area Mensa, and it doesn't surprise me if there's a lot of local variation. I also know another very smart person who's also very fond of Mensa. Perhaps it's not a coincidence that she also lives in the DC area.
If the best company you've found was a math club, perhaps you should be looking for mathematicians and/or math clubs.
I suspect that local Mensas are different. But I also think that none of them even approaches the LW level. Maybe it's a question of size -- if you have say 100 Mensans in one city, 10 of them can be rational and have a nice talk together, aside from the rest of the group. If you only have 10 Mensans in one city, you are out of luck there.
The mathematical club I was in as a child was one of a kind, and the lady who led it doesn't do this anymore. She has her own children now, and she works as a coordinator of correspondence competitions, which is not the same thing as having a club. Unfortunately, there was no long-term plan... If I could somehow restart this thing, I would try something like the Scouts do (okay, I don't know many details about the Scouts, but this is my impression): I would encourage some members to become new leaders, so that the whole thing does not fall apart when the main person no longer has time; I would try to make a self-reproducing system.
There is an interesting background to that mathematical club. It started with a Czech elementary-school teacher of mathematics, Vít Hejný, who taught himself some of Piaget's psychology from books, and based on this + his knowledge of math + some experimenting in education he developed his own method of teaching mathematics. He later taught it to a group of interested students; one of them was the lady who organized my club. But until recently, there was no book explaining the concepts. And even with the book, this man was a psychology autodidact, so he invented a lot of unusual words to describe the concepts he used, and it would be difficult to read for someone without first-hand experience. And most psychologists wouldn't grok the mathematical aspect of the thing, because it is a theory of "how people think when they think about mathematical problems". So I am afraid the whole art will be forgotten. (Perhaps unless someone translates his book into English, substituting the proper psychological terminology for his neologisms, if there are exact equivalents.)
Also, that mathematical club had some "kalokagathia" aspects; we did a lot of sport, and had logical debates. That's not the same thing as mathematicians working alone, or math students spending their free time on Facebook. Sometimes I think the math (at the olympiad level) simply worked as a filter for high-quality people -- selecting both for intelligence and a desire to become stronger. I am not aware of any math club existing in my city, but people doing math competitions could be the proper group. I just need to make them meet in one place.
Why do you live in Slovakia?
I was born here, and I never lived anywhere else (longer than two weeks). I dislike travelling, and I feel uncomfortable speaking another language (it has a cognitive cost, so I feel I sound more stupid than I would in my language). Generally, I dislike changes -- I should probably work on that, but this is where I am now.
I could also provide some rationalization... uhh, I have friends here, I am familiar with how the society works here, maybe I prefer being a fish in a smaller pond -- okay the last one is probably honest, too.
I work as a software engineer, am married with two kids, live in Israel, and blog mostly in Russian. AMA.
Why do you even waste time on lj-russians? The level of the discourse is lagging roughly two hundred years behind the western world.
I am not interesting, but I've been here a few years.
Are there interesting reasons that some LW regulars feel disdain for RationalWiki, besides RW's unflattering opinion of LW/EY? Can you steelman that disdain into a short description of what's wrong with RW, from their point of view? (I'm asking as someone basically unfamiliar with RW).
I think the main reason is that basically nobody in the wider world talks about LW, and RW is the only place that talks about LW even that much. And RW can't reasonably be called very interested in LW either (though many RW regulars find LW annoying when it comes to their attention). Also, we use the word "rational", which LW thinks of as its own - I think that's a big factor.
From my own perspective: RW has many problems. The name is a historical accident (and SkepticWiki.com/org is in the hands of a domainer). Mostly it hasn't enough people who can actually write. It's literally not run by anyone (same way Wikipedia isn't), so is not going to be fixed other than organically. Its good stuff is excellent and informative, but a lot of it isn't quite fit for referring outside fresh readers to.
It surprises me how popular it is (as in, I keep tripping over people using a particular page they like - Alexa 21,000 worldwide, 8800 US - and Snopes uses us a bit) - it turns out there's demand for something that can set out "no, actually, that's BS and here's why, point for point". Raising the sanity waterline does in fact also involve dredging the swamps and cleaning up toxic waste spills. Every time we have a fundraiser it finishes ridiculously quickly ('cos our expenses are literally a couple thousand dollars a year). We have readers who just love us.
On balance, though, I do think RW makes the world a better place rather than a worse one. (Or, of course, I wouldn't bother.)
FWIW, there's a current active discussion on What RW Is For, which I expect not to go anywhere much.
I'm not sure I could reasonably steelman LW opposition to RW as if either were a monolith and there were no crossover (which simply isn't the case). I will note that RW is piss-insignificant, and if you're spending any time whatsoever worrying what RW thinks of LW then you're wasting precious seconds.
(The discussion of RW on LW actually came up on the LW and RW Facebook groups this morning too.)
Because RW sucks at actually being rational. Rather, they seem to have confused being "rational" with supporting whatever they perceive to be the official scientific position. LW, by contrast, holds a number of contrarian positions, most notably on cryonics and the Singularity, where it is widely believed here that the mainstream position is likely wrong and the arguments for it are just silly.
Back when you joined Wikipedia, in 2004, many articles on relatively basic subjects were quite deficient and easily improved by people with modest skills and knowledge. This enabled the cohort that joined then to learn a lot and gradually grow into better editors. This seems much more difficult today. Is this a problem and is there any way to fix it? Has something similar happened with LessWrong, where the whole thing was exciting and easy for beginners some years ago but is "boring and opaque" to beginners now?
My answer may be a bit generic :-)
Re: Wikipedia - This is pretty well-trodden ground, in terms of (a) people coming up with explanations (b) having little evidence as to which of them hold. There's all manner of obvious systemic problems with Wikipedia (maybe the easy stuff's been written, the community is frequently toxic, the community is particularly harsh to newbies, etc) but the odd thing is that the decline in editing observed since 2007 has also held for wikis that are much younger than English Wikipedia - which suggests an outside effect. We're hoping the Visual Editor helps, once it works well enough (at present it's at about the stage of quality I'd have expected; I can assure you that everyone involved fully understands that the Google+-like attempt to push everyone into using it was an utter disaster on almost every level). The Wikimedia Foundation is seriously interested in getting people involved, insofar as it can make that happen.
As for LessWrong ... it's interesting reading through every post on the site (not just the Sequences) from the beginning in chronological order - because then you get the comments. You can see some of the effect you describe. Basically, no-one had read the whole thing yet, 'cos it was just being written.
I'm not sure it was easier for beginners at all. Remember there was only "main" for the longest time - and it was very scary to write for (and still is). Right now you can write stuff in discussion, or in various open threads in discussion.
Thank you. You brought up considerations that hadn't occurred to me.
I am asking everybody here.
Do you have a plan of your own, to ignite the Singularity, the Intelligence explosion, or whatever you want to call it?
If so, when?
How?
I have a plan. Posts here have convinced me that the singularity will most likely be a lose condition for most people. So I'll only activate my plan if I think other actors are getting close.
No, not by myself; I wouldn't have the skillset for it anyway. So I only try to introduce people to things like MIRI, to improve the chances that future discussions won't stop dead in fatalistic and nihilistic clichés. Effective altruism is an angle I use to get a sense of whether a worthwhile elaboration is possible, since steering the argument is somewhat easier when not starting with the craziest stuff first.
In case anyone has questions for me, I'm happy to answer.
What is the philosophy behind your prolific commenting?
In general, online commenting is something I do out of habit. It's a higher return on time than completely passive media consumption such as watching TV, but not something I'd file under time spent for maximum returns.
I generally think that the shift to massive consumption of content via TV/radio in the 20th century was bad for the general discourse of ideas in society. Active engagement helps learning.
I also prefer it over chatting in venues such as IRC, because it provides deeper engagement with ideas and leaves more of a footprint. Created content is findable afterwards.
LessWrong is also a choice to keep me intellectually grounded. These days I spend plenty of time thinking in mental frameworks that are not based on reductionist materialism. I see value in being pretty flexible about changing the map I use to navigate the world, and I don't want to lose access to the intellectual way of thinking.
In total, however, I spend more time than optimal on LW and frequently use it to procrastinate on some other task.
Sure, ask me if you want. Programmer/anime fan/LW reader and commenter.
Are you that lmm?
Yes
What's your favorite anime, and why?
Wandering Son (Hōrō Musuko)
Personal reasons: the story's relevant to my own and in a genre I don't normally pay much attention to, which might be why it stands out over other possible candidates (e.g. Puella Magi Madoka☆Magica). Also, by choosing an artsy show that tackles a serious dramatic subject, full of tragedy (and qbrfa'g erfbyir rirelguvat arngyl at the end), I sound more intellectual.
Pseudo-objective reasons: I feel it accurately captures the feelings of childhood and growing up. I particularly liked the portrayal of the sibling relationship, where you hate each other on a level that's superficial but no less genuine for that, but will stand by each other when you discover things the other really cares about. The conclusion also felt very true-to-life. I liked the visual style; the character designs are much more realistic than the animé norm (and for viewers who find it hard to tell them apart, serve as a demonstration of the valid reasons for the animé norm), and the whole setting and story feels like something you could do in live action. But at the same time this would be completely impossible to produce in live action, for a different reason than normal (child actors and ethical issues), so it shows off the ability of animé to do what other media can't. The slightly washed-out, watercolour visual style is distinctive, even among animé - but it's like that for a reason, the uncertain, blurry visuals aligning perfectly with the emotions this series is trying to convey. Likewise the light, childish-sounding soundtrack is distinctive - but it's not just style for the sake of style, it fits with the show as a whole.
Practical notes: I prefer the 11-episode (rather than 12-episode) release. I've avoided describing the premise because it's an episode 1 spoiler; if you think you'd like the show from this description I recommend watching it (or at least watching episode 1) rather than seeking out more information.
I'll answer anything that will not negatively affect my academic career or violate anyone's privacy but mine (I never felt like I had one). I waive my right not to answer anything else that could be useful to anyone. I'm finishing a master's on the ethics of human enhancement in Brazil, and have just submitted an application for a doctorate at Oxford about moral enhancement.
Sure. I run a Software Dev Shop called Purple Bit, based in Tel Aviv. We specialise in building Python/Angular.js webapps, and have done consulting for a bunch of different companies, from startups to large businesses.
I'm very interested in business, especially Startups and Product Development. Many of my closest friends are running startups, I used to run a startup, and I work with and advise various startups, both technically and business-wise.
AMA, although I won't/can't necessarily answer everything.
You can ask me anything.
Okay, I'll bite. Do you think any part of what MIRI does is at all useful?
It now seems like a somewhat valuable research organisation / think tank. Valuable because they now seem to output technical research that is receiving attention outside of this community. I also expect that they will force certain people to rethink their work in a positive way and raise awareness of existential risks. But there are enough caveats that I am not confident about this assessment (see below).
I never disagreed with the basic idea that research related to existential risk is underfunded. The issue is that MIRI's position is extreme.
Consider the following fictive and actual positions people take with respect to AI risks in ascending order of perceived importance:
1. Someone should actively think about the issue in their spare time.
2. It wouldn't be a waste of money if someone were paid to think about the issue.
3. It would be good to have a periodic conference to evaluate the issue and reassess the risk every year.
4. There should be a study group whose sole purpose is to think about the issue. All relevant researchers should be made aware of the issue.
5. Relevant researchers should be actively cautious and think about the issue.
6. There should be an academic task force that actively tries to tackle the issue.
7. Money should actively be raised to finance an academic task force to solve the issue.
8. The general public should be made aware of the issue to gain public support.
9. The issue is of utmost importance. Everyone should consider contributing money to a group trying to solve the issue.
10. Relevant researchers who continue to work in their field, irrespective of any warnings, are actively endangering humanity.
11. This is crunch time. This is crunch time for the entire human species. And it's crunch time not just for us; it's crunch time for the intergalactic civilization whose existence depends on us. Everyone should contribute all but their minimal living expenses in support of the issue.
Personally, most of the time, I alternate between positions 3 and 4.
Some people associated with MIRI take positions that are even more extreme than position 11 and go as far as banning the discussion of outlandish thought experiments related to AI. I believe that to be crazy.
Extensive and baseless fear-mongering might very well cause MIRI's value to be overall negative.
How should I fight a basilisk?
Every basilisk is different. My current personal basilisk pertains to measuring my blood pressure. I was recently hospitalized as a result of dangerously high blood pressure (220/120 mmHg). Since I left the hospital I have been advised to keep measuring my blood pressure.
The problem I have is that measuring causes panic about the expected result, which increases the blood pressure. Then if the result turns out to be very high, as expected, the panic increases and the next measurement turns out even higher.
Should I stop measuring my blood pressure because the knowledge hurts me or should I measure anyway because knowing it means that I know when it reaches a dangerous level and thus requires me to visit the hospital?
Do you do any sort of meditation?
No. Do you have any recommendations on what to read/try? Given the side effects of anxiety disorder medications such as pregabalin, meditation was one of the alternatives I thought about besides marijuana.
I have a bunch of recommendations, but I'm no expert.
Generic advice: sit or stand with your back straight and unsupported. If sitting, your knees should be below your hips. This means straight chair (soles of feet on the ground), cross-legged on a cushion, or full lotus.
Pay attention to something low-stress. Your breath (possibly just the feeling of it going in and out of your nostrils), a candle flame, your heart beat (if low stress), counting from one to four and back again.
20 minutes is commonly recommended, but I don't think it's crazy to work up from 5 or 10 minutes if 20 is intolerable.
Meditation isn't easy. One of the useful parts of the training is gently putting your attention back where you want it when you notice you're thinking about something else. It may help to have a few simple categories like thought, memory, imagination, sensation to just label thoughts as they go by.
I recommend The Way of Energy by Lam Kam Chuen -- it's an introduction to Daoist meditation (mostly standing). I'm not going to say it's the best ever (I haven't investigated the field), but it's got a good reputation and I've gotten good results from it.
There. Now that I've said some things, I predict that other meditators will come in with more advice.
Measure every hour. Or every ten minutes. Your hormonal system can't sustain the panic state for long, plus seeing high values and realizing that you are not dead yet will desensitize you to these high values.
As someone who's had both high blood pressure and excessive worrying — I second this advice.
I'm heavily interested in instrumental rationality -- that is, optimizing my life by 1) increasing my enjoyment per moment, 2) increasing the quantity of moments, and 3) decreasing the cost per moment.
I've taught myself a decent amount and improved my life with: personal finance, nutrition, exercise, interpersonal communication, basic item maintenance, music recording and production, sexuality and relationships, and cooking.
If you're interested in possible ways of improving your life, I might have direct experience to help, and I can probably point you in the right direction if not. Feel free to ask me anything!
Do you use any quantitative self tools for this? If so, could you elaborate on your data tracking/analysis processes?
Yes, but incompletely. I'll track things precisely until a habit is established, at which point I stop tracking everything and check-in every once in a while to make sure I'm still on track. Some things I keep track of consistently, such as my budget, weight lifting numbers, bodyweight, etc.
The process is different for different things. I usually start with a Google Drive spreadsheet, and then experiment with other more specific apps if they're better than spreadsheets (they rarely are). If you have any more specific questions, I'd be glad to answer them.
Ask me almost anything. I'm very boring, but I have recovered from depression with the help of CBT + pills, am a lurker since back in the OB days and know the orthodoxy here quite well, started to enjoy running (real barefoot if >7 degrees Celsius) after 29 years of no physical activity, am chairman of the local hackerspace (software dev myself, soon looking for a job again), and somehow established the acceptance of a vegan lifestyle in my conservative family (farmers).
What's your motivation for veganism?
What do you enjoy most in software development, and why are you going to be looking for a job again soon? What's your dream SW dev job?
Moral reasons. All else equal, I think that inflicting pain or death is bad, and that the ability to feel pain and the desire not to die are very widespread. I also think that the intensity of pain in simpler animals is still very strong (I don't think pain needed humans' large brains to become intense). I also think that our ability to manage pain only slightly reduces the impact of our being able to suffer more strongly and in more varieties. But I give, for sanity-check reasons, priority to the desires of "more complex" animals, like humans.
Due to our technical ability we can now produce supplements for micronutrients which are missing or insufficiently available in plants[1], so I see the health concerns as resolved. All the pain and death that I would inflict would then only be there for greater enjoyment of food. Although I love the taste of meat and animal products, the comparative enjoyment is not big enough that I would kill for it. That I can enjoy plant-based foods is partly due to my not being afraid of using my kitchen, and having a good vegan/vegetarian self-service restaurant 100m from my apartment.
And then there are the environmental reasons, the antibiotic use, etc. They count, and might even be sufficient on their own, but I'll only investigate those in case my other concerns/reasons are invalidated.
[1] There are vegan vitamin B12, vitamin D3, EPA/DHA (omega-3), and creatine supplements.
I can't really answer what I enjoy most; I like almost every job that comes up, with only a few exceptions. I hate repeating myself, and I hate having to do things in a ... ... ... way against my better judgement. I prefer to spend more time (as in effort and calendar time) on the architecture/design/coding parts, but I also prefer doing other stuff once in a while to being purely a lonely coder.
I will give my notice in a few hours, so I'll then search for a new job. I will have two months for that, though, and maybe I'll take some time off before starting at a new company. I'm ending this job because none of the money, project, or team is good enough to make me happy, and the job market for software developers allows searching for improved conditions.
My dream SW job would involve writing open source software which somehow tangibly improves the lives of some people (think better medical DAq (data acquisition) and analysis instead of the newest photo-sharing app), working with a team where competence and respect are widespread, as is friendliness, and pay which is not worse than what I got when I was still failing to drop out of college. Sadly, I do not think such a job exists, especially not for people like me (who do not have the necessary skills for anything fancy).
That's not boring, it is impressive and admirable. Well done.
Thanks!
What steps did you take to start enjoying running?
This was surprisingly simple: I got myself to want to run, started running, and patted myself on the back every time I did it.
The want part was a bit of luck: I always thought I "should" do some sports, for physical and, more importantly, mental health reasons, and I think that being able to do stuff is better than not being able, ceteris paribus. So I was thinking about what kind of activity I might prefer.
I like my alone time (so team or pair sports are out), and I dislike spending money when I expect it to be wasted (like gym memberships, bikes, et al.). I also feel easily embarrassed and ashamed, and like to get myself at least somewhat up to speed on my own.
Running fits those side requirements. By chance I got hold of "Born to Run", and even after the first quarter of the book I thought that it would be great if I could just go out on a bad day and spend an hour free of shit, or that I could just reach some location a few kilometers away without any prep or machines or services.
I then decided that I would start running, and that my primary goal would be to like it and be able to do it even in old age, if such should happen. With the asterisk that I give myself an easy way out in case of physical pain or unexpected hatred of the activity, but not for any weasel reasons.
I didn't start running for another one and a half years, because of the Schweinehund, subtype Innerer (the German "inner pig-dog": one's lack of willpower). When my mood was getting slightly better (I was again able to do productive work), I started, with the "habit formation" mindset. I also didn't tell anyone in the beginning. I think it helped that I already had some knowledge of how to train and run correctly, which especially in the beginning meant that I always felt like I could run further than I was "allowed" to.
And for good feedback: However it went, when I finished my training, I "said" to myself: I did good. I feel good. I feel better than before I started. I wrote every single run down on RunKeeper and Fitocracy, and always smiled at the "I'm awesome!" button of the latter one. I'm also quite sure that having at least one new personal best once a week helped. (Also, when you run barefoot, you get the "crazy badass" card for free, however slow you run. I like this.)
Once started, such a feedback loop is quite powerful. When I once barely trained for a month, I was also surprised that getting back into regular running after that down-phase was so much easier. Now, after only seven months of training, I've gone from doing walk/run for 15 minutes to running 75 minutes, with no problem starting cold on a 6% incline for the first two kilometers. I'm proud. Feels good (which is quite new to me).
I'm in grade 12, and know math at about a graduate level. AMA.
What are the top 5 fiction books you've read in your life?
To what extent did you study math on your own initiative? (What kind of support did you have from parents, teachers, and institutions? Were resources readily available, or did you have to work to seek them out?)
I'm a 24-year-old guy looking for a job and have a great interest in science and game design. I read a lot of LW but I rarely feel comfortable posting. I wished there was a LW meetup group in Belgium, and when nobody seemed to want to take the initiative I set one up myself. I didn't expect anyone to show, but now, two years later, it's still going. Ask me anything you want, but I reserve the right not to answer.
How hard did you find it to be to organize/run a meetup? How did that compare to what you expected?
[Self-deprecating observations about my knowledge, interestingness, etc.] But I have been reading this site for a while, so on the off chance anyone's curious, sure, why not: ask me anything.
Sure, what the heck. Ask me stuff.
Professional stuff: I work in tech, but I've never worked as a developer — I have fifteen years of experience as a sysadmin and site reliability engineer. I seem to be unusually good at troubleshooting systems problems — which leaves me in the somewhat unfortunate position of being most satisfied with my job when all the shit is fucked up, which does not happen often. I've used about a dozen computer languages; these days I code mostly in Python and Go; for fun I occasionally try to learn more Haskell. I've occasionally tried teaching programming to novices, which is one incredible lesson in illusion of transparency, maybe even better than playing Zendo. I've also conducted around 200 technical interviews.
Personal stuff: I like cooking, but I don't stress about diet; I have the good fortune to prefer salad over dessert. I do container gardening. I've studied nine or ten (human) languages, but alas am only fluent in English; of those I've studied, the one I'd recommend as the most interesting is ASL. I'm polyamorous and in a settled long-term relationship. I get along pretty well with feminists — and think the stereotypes about feminists are as ridiculous as the stereotypes about libertarians. My Political Compass score floats around (1, –8) in the "weird libertarian" end of the pool. I play board games; I should probably play more Go, but am more likely to play more Magic. I was briefly a Less Wrong meetup organizer.
How'd you get to be this way?
I'm not sure, but one of the techniques that seems most salient to me is breadth-first search. Partly this is to hold off on proposing solutions. Take just a little bit longer to look at the problem and gather data before generating hypotheses. The second part is to find cheap tests to disprove your hypotheses instead of going farther down the path that an early hypothesis leads. Folks who use depth-first search, building up a large tree of hypotheses first or going down a long path of possible tests and fixes, seem more likely to get stuck.
I also really like troubleshooting out loud with colleagues who aren't afraid to contradict each other. Generating lots of hypotheses and quickly disconfirming most of them can quickly narrow down on the problem. "Okay, maybe the cause is a bad data push. But if that were so, it would be on all the servers, not just the ones in New York, because the data push logs say the push succeeded everywhere. But the problem's just in New York. So it's not the data push."
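The breadth-first approach described above can be sketched in code (this is my illustration of the idea, not the commenter's actual workflow, and the hypotheses and observations are made up): generate several hypotheses up front, then run the cheapest disconfirming test for each before committing to any one path.

```python
# Breadth-first troubleshooting sketch: keep a flat list of hypotheses,
# each paired with an estimated test cost, and run cheap tests first to
# disconfirm as many hypotheses as possible before going deep on any one.

def diagnose(hypotheses):
    """hypotheses: list of (name, cost, test), where test() returns
    False if the hypothesis is disconfirmed by a cheap check."""
    # Cheapest tests first, so early effort eliminates the most branches.
    for name, cost, test in sorted(hypotheses, key=lambda h: h[1]):
        if test():
            return name  # first hypothesis that survives its cheap test
    return None

# Hypothetical scenario mirroring the comment: servers misbehaving only
# in New York, and the push logs say the data push succeeded everywhere.
observations = {"bad_push_everywhere": False, "ny_switch_flapping": True}

hypotheses = [
    ("bad data push", 1, lambda: observations["bad_push_everywhere"]),
    ("network issue in NY", 2, lambda: observations["ny_switch_flapping"]),
]

print(diagnose(hypotheses))  # -> network issue in NY
```

The cheap "data push" check fails first (the logs contradict it), so the search moves sideways to the next hypothesis instead of digging deeper into the push pipeline.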
What's the best programming language to learn in order to get a job? Or a good job, if the two answers would differ.
(Open question; it's too bad there isn't an "ask everyone who works in tech" thread or somesuch. For background, I used to know Java, as well as BASIC and bits of assembly, but a series of unfortunate chance events distracted me from programming about five years ago and I haven't done any since.)
I sometimes speak English fluently, possess a high school diploma, and live in the great United States of America. If you ask a question, I may answer.
Ask me about parenting.
How do you instill discipline (e.g. don't be mean to your sister, wash your hands after the potty, no jumping on the couch, etc.) without being authoritarian and while maintaining a positive self-image?
Montessori education: Good idea? Bad idea? Fish?
A North American non-Montessori educator (director of daycare) said that Montessori is different in various parts of the world. I did not do more research into this, and obviously this comment can be easily biased and seen to have an agenda. However, based on this comment alone, I'm also interested in whether you (Gunnar_Zarncke) thought about putting your children through (European) Montessori.
Biology/genetics graduate student here, studying the interaction of biological oscillations with each other in yeast, quite familiar with genetic engineering due to practical experience and familiar with molecular biology in general. Fire away.
I have written various things, collected here, including what I think is the second most popular (or at least usually second-mentioned) rationalist fanfiction. I serve dinner to the Illuminati. AMA.
What's the status of Effulgence? I gave up on it soon after it branched out wildly around the Milliways part, and when I checked to see what's going on, there appeared to be no updates in 6 months or so.
Anything else you've written recently that you may recommend?
My coauthor for Effulgence is suffering from an inability to can. It is slowly recovering (today we were able to do a not-in-Effulgence-continuity sandbox thread for a little more than thirty comments, and she's been writing an unrelated short story!) and we are continuing to make plans for what we will write when the ability to can is restored. The last new post was made in November 2013, though, so I'm not sure where you're getting "6 months or so".
I periodically update curious parties about Effulgence behind-the-scenes goings-on in this TV Tropes forum thread which was originally about Elcenia but is now about my stuff in more generality.
I have released two short stories relatively recently, though the latter (AU-fanfiction-of-sorts of Three Worlds Collide) was written back in 2012 and I just sat on it for a while. I have also been writing a series of social justice blog posts for alternate universes which have inspired some entertaining audience participation. I recommend subscribing to my general RSS feed if you are curious about my creative output.
I have less than zero idea how far you got into Effulgence when you describe yourself as dropping it "after it branched out wildly around the Milliways part". But if wild branching and Milliways were turnoffs for you I don't think you're gonna like anything after that mysterious part.
Thanks!
If I recall, I really liked the story as a standalone one, up until the Luminosity Bella showed up. Of course, given the name and the nature of the RP, I should have expected it.
Yeah, there are, um, lots of them. You can read some of their stories before they hit the "peal" as self-contained AUs, if you want - just go to the first instance of a new "symbella" in the index (except for the lower-case omega, that's a special case), and read only posts that have no other symbellas. (Some posts have no symbella and these are usually part of the same story as whatever's closest to them, it just means the relevant Bell isn't present in that particular thread.) These will sometimes cut off kind of awkwardly, of course...
Ah, thanks, I'll give it a try. I was confused about where the stories start.
My impression of Luminosity, after reading it and before reading Radiance, was that it was essentially depicting the usefulness of luminosity more or less entirely by showing vampire-Bella completely losing her luminosity techniques/attitudes. To what degree did you intend this? Do you see it as accurate?
Also what do you think of Syzygy, seven years down the line? (Me(highschool) quite liked it. Me(2014) was very surprised to discover that it was written by someone I encountered again elsewhere.)
My primary interest is determining what the "best" thing to do is, especially via creating a self-improving institution (e.g., an AGI) that can do just that. My philosophical interests stem from that pragmatic desire. I think there are god-like things that interact with humans, and I hope that's a good thing, but I really don't know. I think LessWrong has been in Eternal September mode for a while now, so I mostly avoid it. Ask me anything, I might answer.
Why do you believe that there are god-like beings that interact with humans? How confident are you that this is the case?
I like the idea.
Here we go, things that might be interesting to people to ask about:
born in Kharkov, Ukraine, 1975, Jewish mother, Russian father
went to a great physics/math school there (for one year before moving to US), was rather average for that school but loved it. Scored 9th in the city's math contest for my age group largely due to getting lucky with geometry problems - I used to have a knack for them
moved to US
ended up in a religious high school in Seattle because I was used to having lots of Jewish friends from the math school
Became an orthodox Jew in high school
Went to a rabbinical seminary in New York
After 19 years, an accumulation of doubts regarding some theological issues, the Haitian disaster, and a lot of help from LW, I quit religion
Mostly worked as a programmer for startups with the exception of Bloomberg, which was a big company; going back to startups (1st day at Palantir tomorrow)
self-taught enough machine learning/NLP to be useful as a specialist in this area
Married with 3 boys, the older one is a high-functioning autistic
Am pretty sure AI issues are important to worry about. MIRI and CFAR supporter
I'm a programmer at Google in Boston doing earning to give, I blog about all sorts of things, and I play mandolin in a dance band. Ask me anything.
I wrote a book about a new philosophy of empirical science based on large scale lossless data compression. I use the word "comperical" to express the idea of using the compression principle to guide an empirical inquiry. Though I developed the philosophy while thinking about computer vision (in particular the chronic, disastrous problems of evaluation in that field), I realized that it could also be applied to text. The resulting research program, which I call comperical linguistics, is something of a hybrid of linguistics and natural language processing, but (I believe) on much firmer methodological ground than either. I am now carrying out research in this area, AMA.
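The compression principle can be illustrated with a toy example (my sketch, not taken from the book): data with real structure encodes into fewer bytes than the same data with its structure destroyed, so compressed size can serve as an objective score when comparing competing models of a dataset.

```python
import random
import zlib

# Toy illustration of compression-guided inquiry: the same bytes,
# with and without their ordering structure, compress very differently.
structured = ("the cat sat on the mat. " * 200).encode()

# Destroy the sequential structure while keeping the byte frequencies.
rng = random.Random(0)
shuffled = bytes(rng.sample(structured, len(structured)))

size_structured = len(zlib.compress(structured))
size_shuffled = len(zlib.compress(shuffled))

# Regular data encodes in fewer bytes; a model that captures real
# regularities would likewise shrink the encoding of its domain.
print(size_structured < size_shuffled)
```

In the book's terms (as I read the summary above), a linguistic theory would play the role of the regularity-capturing model: the better the theory, the shorter the lossless encoding of a text corpus it enables.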