I've been getting an increasing number of interview requests from reporters and book writers (stemming from my connection with Bitcoin). In the interest of being lazy, instead of doing more private interviews I figure I'd create an entry here and let them ask questions publicly, so I can avoid having to answer redundant questions. I'm also open to answering any other questions of LW interest here.
In preparation for this AMA, I've updated my script for retrieving and sorting all comments and posts of a given LW user, to also allow filtering by keyword or regex. So you can go to http://www.ibiblio.org/weidai/lesswrong_user.php, enter my username "Wei_Dai", then (when the page finishes loading) enter "bitcoin" in the "filter by" box to see all of my comments/posts that mention Bitcoin.
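The filtering step the script performs can be sketched in a few lines. This is only an illustrative sketch, not the actual PHP code behind the page: the function name and sample data are made up, and it assumes the comments have already been fetched as plain strings.

```python
import re

def filter_comments(comments, pattern):
    """Return only the comments whose text matches the regex (case-insensitive)."""
    rx = re.compile(pattern, re.IGNORECASE)
    return [c for c in comments if rx.search(c)]

# Hypothetical sample data, standing in for a user's fetched comment history.
comments = [
    "b-money was an early proposal for an anonymous, distributed cash system.",
    "Some thoughts on decision theory and logical uncertainty.",
    "Bitcoin's design differs from b-money in several ways.",
]

print(filter_comments(comments, r"bitcoin"))
```

Entering a plain keyword like "bitcoin" works because any literal string is also a valid regex; users who want more precision can supply a full pattern instead.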
I was surprised to see, both on your website and the white paper, that you are part of Mercatoria/ICTP (although your level of involvement isn't clear based on public information). My surprise is mainly because you have a couple of comments on LessWrong that discuss why you have declined to join MIRI as a research associate. You have also (to my knowledge) never joined any other rationality-community or effective altruism-related organization in any capacity.
My questions are:
I seem to have missed this question when it was posted.
You have also (to my knowledge) never joined any other rationality-community or effective altruism-related organization in any capacity.
With the background that I have an independent source of income and it's costly to move my family (we're not near any major orgs) so I'd have to join in a remote capacity, I wrote down this list of pros and cons of joining an org (name redacted) that tried to recruit me recently:
Pros
Cons
I received a PM from someone at a Portuguese newspaper who I think meant to post it publicly, so I'll respond publicly here.
You have contacted Satoshi Nakamoto. Does it seem to you to be only one person, or a group of developers?

I think Satoshi is probably one person.
Does bitcoin seem cyberpunk project to you? In that case, can one expect they ever disclose identity?
Not sure what the first part of the question means. I don't expect Satoshi to voluntarily reveal his identity in the near future, but maybe he will do so eventually?
In that case, the libertarian motivation wouldn't be a risk to anyone who invest in the community? Like one this gets all formal and legal, it blow?
Don't understand this one either.
Is it important to know right now its origins? The author of the blog LikeInAMirror, who says the most probable name is Nick Szabo, argues there is a concern about risk: if Szabo/cypherpunk is the source, no risk; but it may be a bubble - a pump-and-dump scheme to enrich its original miners - or a project from the federal government to track underground transactions. What is your view on this?
I'm pretty sure it's not a pump-and-dump scheme, or a government project.
...Do you also t
What do you think are the most interesting philosophical problems within our grasp to be solved?
I'm not sure there are any. A big part of it is that metaphilosophy is essentially a complete blank, so we have no way of saying what counts as a correct solution to a philosophical problem, and hence no way of achieving high confidence that any particular philosophical problem has been solved, except maybe simple (and hence not very interesting) problems, where the solution is just intuitively obvious to everyone or nearly everyone. It's also been my experience that any time we seem to make real progress on some interesting philosophical problem, additional complications are revealed that we didn't foresee, which makes the problem seem even harder to solve than before the progress was made. I think we have to expect this trend to continue for a while yet.
If you instead ask what are some interesting philosophical problems that we can expect visible progress on in the near future, I'd cite decision theory and logical uncertainty, just based on how much new effort people are putting into them, and results from the recent past.
...Do you think that solving normative ethics won't happen unti
It's Chinese Pinyin romanization, so pronounced "way dye".
ETA: Since Pinyin is a many to one mapping, and as a result most Chinese articles about Bitcoin put the wrong name down for me, I'll take this opportunity to mention that my name is written logographically as 戴维.
I'm a Research Associate at MIRI. I became a supporter in late 2005, then contributed to research and publication in various ways. Please, AMA.
Opinions I express here and elsewhere are mine alone, not MIRI's.
To be clear, as an Associate, I am an outsider to the MIRI team (who collaborates with them in various ways).
Median estimate for when they'll start working on a serious code project (i.e., not just toy code to illustrate theorems) is 2017.
This will not necessarily be development of friendly AI -- maybe a component of friendly AI, maybe something else. (I have no strong estimates for what that other thing would be, but just as an example--a simulated-world sandbox).
Everything I say above (and elsewhere) is my opinion, not MIRI's. Median estimate for when they'll start working on friendly AI, if they get started with that before the Singularity, and if their direction doesn't shift away from their apparent current long-term plans to do so: 2025.
I've talked to a former grad student (fiddlemath, AKA Matt Elder) who worked on formal verification, and he said current methods are not anywhere near up to the task of formally verifying an FAI. Does MIRI have a formal verification research program? Do they have any plans to build programming processes like this or this?
I think that the estimates cannot be undertaken independently. FAI and UFAI would each pre-empt the other. So I'll rephrase a little.
I estimate the chances that some AGI (in the sense of "roughly human-level AI") will be built within the next 100 years as 85%, which is shorthand for "very high, but I know that probability estimates near 100% are often overconfident; and something unexpected can come up."
And "100 years" here is shorthand for "as far off as we can make reasonable estimates/guesses about the future of humanity"; perhaps "50 years" should be used instead.
Conditional on some AGI being built, I estimate the chances that it will be unfriendly as 80%, which is shorthand for "by default it will be unfriendly, but people are working on avoiding that and they have some small chance of succeeding; or there might be some other unexpected reason that it will turn out friendly."
What, if anything, do you think a lesswrong regular who's read the sequences and all/most of MIRI's non-technical publications will get out of your book?
How much time did it take you to write the singularity book? How much money has it brought you?
Same question about your microeconomics textbook. Also, what motivated you to write it given that there must be about 2^512 existing ones on the market?
Hard to say about the time because I worked on both books while also doing other projects. I suspect I could have done the Singularity book in about 1.5 years of full time effort. I don't have a good estimate for the textbook. Alas, I have lost money on the singularity book because the advance wasn't all that big, and I had personal expenses such as hiring a research assistant and paying a publicist. The textbook had a decent advance, still I probably earned roughly minimum wage for it. Surprisingly, I've done fairly well with my first book, Game Theory at Work, in part because of translation rights. With Game Theory at Work I've probably earned several times the minimum wage. Of course, I'm a professor and part of my salary from my college is to write, and I'm not including this.
I wanted to write a free market microeconomics textbook, and there are very few of these. I was recruited to write the textbook by the people who published Game Theory at Work. Had the textbook done very well, I could have made a huge amount of money (roughly equal to my salary as a professor) indefinitely. Alas, this didn't happen but the odds of it happening were well under 50%. Since teaching microeconomics is a big part of my job as a college professor, there was a large overlap between writing the textbook and becoming a better teacher. My textbook publisher sent all of my chapters to other teachers of microeconomics to get their feedback, and so I basically got a vast amount of feedback from experts on how I teach microeconomics.
No. I ran as a Republican in one of the most Democratic districts in Massachusetts, my opponent was the second most powerful person in the Massachusetts State Senate, and even Republicans in my district had a high opinion of him.
I wanted to get more involved in local Republican politics and no one was running in the district and it was suggested that I run. It turned out to be a good decision as I had a lot of fun debating my opponent and going to political events. Since winning wasn't an option, it was even mostly stress free.
Why do you think that it is so hard to get through to people?
Not only you, but others involved in this, and I myself, have all found that intelligent people will listen and even understand what you are telling them -- I probe for inferential gaps, and if they exist they are not obvious.
Yet almost no one gets on board with the MIRI/FHI program.
Why?
I have thought a lot about this. Possible reasons: most humans don't care about the far future or about people who are not yet born; most things that seem absurd really are absurd and are not worth investigating, and the singularity certainly seems absurd on the surface; the vast majority is right, and you and I are incorrect to worry about a singularity; it's impossible for people to imagine an intelligent AI that doesn't have human-like emotions; the Fermi paradox implies that civilizations such as ours are not going to be able to think rationally about the far future; and an ultra-AI would be a god, and so is disallowed by most people's religious beliefs.
Your question is related to why so few sign up for cryonics.
I'm a PhD student in artificial intelligence, and co-creator of the SPARC summer program. AMA.
My primary interest is determining what the "best" thing to do is, especially via creating a self-improving institution (e.g., an AGI) that can do just that. My philosophical interests stem from that pragmatic desire. I think there are god-like things that interact with humans and I hope that's a good thing but I really don't know. I think LessWrong has been in Eternal September mode for awhile now so I mostly avoid it. Ask me anything, I might answer.
I believe so for reasons you wouldn't find compelling, because the gods apparently do not want there to be common knowledge of their existence, and thus do not interact with humans in a manner that provides communicable evidence. (Yes, this is exactly what a world without gods would look like to an impartial observer without firsthand incommunicable evidence. This is obviously important but it is also completely obvious so I wish people didn't harp on it so much.) People without firsthand experience live in a world that is ambiguous as to the existence or lack thereof of god-like beings, and any social evidence given to them will neither confirm nor deny their picture of the world, unless they're falling prey to confirmation bias, which of course they often do, especially theists and atheists. I think people without firsthand incommunicable evidence should be duly skeptical but should keep the existence of the supernatural (in the everyday sense of that word, not the metaphysical sense) as a live hypothesis. Assigning less than 5% probability to it is, in my view, a common but serious failure of social epistemic rationality, most likely caused by arrogance. (I think LessWrong is es...
I have written various things, collected here, including what I think is the second most popular (or at least usually second-mentioned) rationalist fanfiction. I serve dinner to the Illuminati. AMA.
I'm an unemployed legally blind mostly white American who may have at one point been good at math and programming, who is just smart enough to get loads of spam from MIT, but not smart enough to avoid putting my foot in my mouth an average of monthly on Lesswrong. I've been talking about blindness-related issues a lot over the past year mostly because I suddenly realized that they were relevant, but my aim is to solve these problems as quickly as possible so I can get back to getting better at things that actually matter. On the off chance that you have questions, feel free to AMA.
Which academic disciplines care about causality? (I'm guessing statistics, CS, philosophy... anything else?)
On some level any empirical science cares, because the empirical sciences all care about cause-effect relationships. In practice, the 'penetration rate' is path-dependent (that is, depends on the history of the field, personalities involved, etc.)
To add to your list, there are people in public health (epidemiology, biostatistics), social science, psychology, political science, economics/econometrics, computational bio/omics that care quite a bit. Very few philosophers (excepting the CMU gang, and a few other places) think about causal inference at the level of detail a statistician would. CS/ML do not care very much (even though Pearl is CS).
Is there anything like a mainstream agreement on how to model/establish causality? E.g. does more or less everyone agree that Pearl's book, which I haven't read, is the right approach? If not, is it possible to list the main competing approaches?
I think there is as much agreement as there can reasonably be for a concept such as causality (that is, a philosophically laden concept that's fun to argue about). People model it in ...
Ask me almost anything. I'm very boring, but I have recovered from depression with the help of CBT + pills, have been a lurker since back in the OB days and know the orthodoxy here quite well, started to enjoy running (real barefoot if >7 degrees Celsius) after 29 years of no physical activity, am chairman of the local hackerspace (software dev myself, soon looking for a job again), and somehow established the acceptance of a vegan lifestyle in my conservative family (farmers).
I didn't think I had anything particularly interesting to offer, but then it occurred to me that I have a relatively rare medical disorder: my body doesn't produce any testosterone naturally, so I have to have it administered by injection. As a result I went through puberty over the age range of ~16-19 years old. If you're curious feel free to AMA.
(also, bonus topic that just came to mind: every year I write/direct a Christmas play featuring all of my cousins, which is performed for the rest of the family on Christmas Eve. It's been going on for over 20 years and now has its own mythology, complete with anti-Santa. It gets more elaborate every year and now features filmed scenes, with multi-day shoots. This year the villain won, Christmas was cancelled for seven years and Santa became a bartender (I have a weird family). It's...kind of awesome? If you're looking for a fun holiday tradition to start AMA)
Biology/genetics graduate student here, studying the interaction of biological oscillations with each other in yeast, quite familiar with genetic engineering due to practical experience and familiar with molecular biology in general. Fire away.
Shortened telomeres are a red herring. You need multiple generations of a mammal not having telomerase before you get premature ageing, and all the research you've heard about where they 'reversed ageing' with telomerase was putting it back into animals that had been engineered to lack it for generations. Plus lack of telomerase in most of your somatic cells is one of your big anti-cancer defenses.
Much more of a problem is things like nuclear pores never being replaced in post-mitotic cells (they're only replaced during mitosis) and slowly oxidizing and becoming leaky, extracellular matrix proteins having a finite lifetime, and all kinds of metabolic dysregulation and protein metabolism issues.
This isn't exactly my field, but there's a few interesting actual lines of research I've seen. One is an apparent reduction in protein-folding chaperone activity with age in many animals from C. elegans to humans [people LOVE C. elegans for ageing studies because they can enter very long-lived quiescent phases in their life cycle, and there are mutations with very different lifespans]. People still aren't quite sure what that means or where it comes from.
There's lots of interest in calor...
The dreaded answer: 'Well, it depends..."
The genetic code - the relationship between base triplets in the reading frame of a messenger RNA and amino acids that come out of the ribosome that RNA gets threaded through – is at least as ancient as the most recent common ancestor of all life and is almost universal. There are living systems that use slightly different codons though – animal and fungal mitochondria, for example, have a varied lot of substitutions, and ciliate microbes have one substitution as well. If you were to move things back and forth between those systems, you would need to change things or else there would be problems.
If you avoid or compensate for those weird systems, you can move reading frames wherever you want and they will produce the same primary protein sequence. The interesting part is getting that sequence to be made and making sure it works in its new context.
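The codon-table differences above can be illustrated with a toy example. This is a deliberately tiny sketch, not a real bioinformatics tool: only four codons are shown, and the mitochondrial substitutions listed (UGA→Trp, AUA→Met, AGA→Stop) are the well-known vertebrate mitochondrial ones.

```python
# Toy illustration: the same mRNA codons decode differently under the
# standard genetic code vs. the vertebrate mitochondrial code.
STANDARD = {"UGG": "Trp", "UGA": "Stop", "AUA": "Ile", "AGA": "Arg"}
VERTEBRATE_MITO = {**STANDARD, "UGA": "Trp", "AUA": "Met", "AGA": "Stop"}

def translate(codons, table):
    """Translate a list of codons, stopping at the first stop codon."""
    peptide = []
    for codon in codons:
        amino_acid = table[codon]
        if amino_acid == "Stop":
            break
        peptide.append(amino_acid)
    return peptide

reading_frame = ["AUA", "UGA", "UGG"]
print(translate(reading_frame, STANDARD))         # terminates early at UGA
print(translate(reading_frame, VERTEBRATE_MITO))  # reads UGA as tryptophan
```

The same reading frame yields a one-residue peptide under the standard code but a three-residue peptide in the mitochondrial context, which is exactly why sequences moved between such systems need to be recoded.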
At the protein level, some proteins require the proper context or cofactors or small molecules to fold properly. For example, a protein that depends on disulfide bonds to hold itself in the correct shape will never fold properly if it is expressed inside a bacterium or in the cytosol of a ...
I'm a programmer at Google in Boston doing earning to give, I blog about all sorts of things, and I play mandolin in a dance band. Ask me anything.
What are you working on at google?
ngx_pagespeed and mod_pagespeed. They are open source modules for nginx and apache that rewrite web pages on the fly to make them load faster.
How much do you earn?
$195k/year, all things considered. (That's my total compensation over the last 19 months, annualized. Full details: http://www.jefftk.com/money)
How much do you give, and to where?
Last year Julia and I gave a total of $98,950 to GiveWell's top charities and the Centre for Effective Altruism. (Full details: http://www.jefftk.com/donations)
I like the idea.
Here we go, things that might be interesting to people to ask about:
born in Kharkov, Ukraine, 1975, Jewish mother, Russian father
went to a great physics/math school there (for one year before moving to US), was rather average for that school but loved it. Scored 9th in the city's math contest for my age group largely due to getting lucky with geometry problems - I used to have a knack for them
moved to US
ended up in a religious high school in Seattle because I was used to having lots of Jewish friends from the math school
Became an
I understand ancient Greek philosophy really well. In case that has come up. I'm a PhD student in philosophy, and I'd be happy to talk about that as well.
If anyone's interested (ha!), then sure, go ahead, ask me anything. (Of course I reserve the right not to answer if I think it would compromise my real-world identity, etc.)
N.B. I predict at ~75% that this thread will take off (i.e. get more than about 20 comments) iff Eliezer or another public figure decides to participate.
Feel free to ask me (almost) anything. I'm not very interesting, but here are some possible conversation starters.
Some LW-folks have in the past asked me questions about my stroke and recovery when it came up, and seemed interested in my answers, so it might be useful to offer to answer such questions here. Have at it! (You can ask me about other things if you want, too.)
I'm a 24-year-old guy looking for a job and have a great interest in science and game design. I read a lot of LW but I rarely feel comfortable posting. I wished there was a LW meetup group in Belgium and when nobody seemed to want to take the initiative I set one up my self. I didn't expect anyone to show, but now, two years later it's still going. Ask me anything you want, but I reserve the right not to answer.
I'm heavily interested in instrumental rationality -- that is, optimizing my life by 1) increasing my enjoyment per moment, 2) increasing the quantity of moments, and 3) decreasing the cost per moment.
I've taught myself a decent amount and improved my life with: personal finance, nutrition, exercise, interpersonal communication, basic item maintenance, music recording and production, sexuality and relationships, and cooking.
If you're interested in possible ways of improving your life, I might have direct experience to help, and I can probably point you in the right direction if not. Feel free to ask me anything!
Do you think any part of what MIRI does is at all useful?
It now seems like a somewhat valuable research organisation / think tank. Valuable because they now seem to output technical research that is receiving attention outside of this community. I also expect that they will force certain people to rethink their work in a positive way and raise awareness of existential risks. But there are enough caveats that I am not confident about this assessment (see below).
I never disagreed with the basic idea that research related to existential risk is underfunded. The issue is that MIRI's position is extreme.
Consider the following hypothetical and actual positions people take with respect to AI risks, in ascending order of perceived importance:
Someone should actively think about the issue in their spare time.
It wouldn’t be a waste of money if someone was paid to think about the issue.
It would be good to have a periodic conference to evaluate the issue and reassess the risk every year.
There should be a study group whose sole purpose is to think about the issue. All relevant researchers should be made aware of the issue.
Relevant researchers should be actively cautious and think about th
How should I fight a basilisk?
Every basilisk is different. My current personal basilisk pertains to measuring my blood pressure. I was recently hospitalized as a result of dangerously high blood pressure (220/120 mmHg). Since I left the hospital I have been advised to measure my blood pressure.
The problem I have is that measuring causes panic about the expected result, which increases the blood pressure. Then if the result turns out to be very high, as expected, the panic increases and the next measurement turns out even higher.
Should I stop measuring my blood pressure because the knowledge hurts me or should I measure anyway because knowing it means that I know when it reaches a dangerous level and thus requires me to visit the hospital?
The problem I have is that measuring causes panic about the expected result, which increases the blood pressure. Then if the result turns out to be very high, as expected, the panic increases and the next measurement turns out even higher.
Measure every hour. Or every ten minutes. Your hormonal system can't sustain the panic state for long, plus seeing high values and realizing that you are not dead yet will desensitize you to these high values.
You can ask me things if you like. At Reddit, some of the most successful AMAs are when people are asked about their occupation. I have a PhD in linguistics/philology and currently work in academia. We could talk about academic culture in the humanities if someone is interested in that.
Can you talk about your specific field in linguistics/philology?
I've mucked about here and there including in language classification (did those two extinct tribes speak related languages?), stemmatics (what is the relationship between all those manuscripts containing the same text?), non-traditional authorship attribution (who wrote this crap anyway?) and phonology (how and why do the sounds of a word "change" when it is inflected?). To preserve some anonymity (though I am not famous) I'd rather not get too specific.
what are the main challenges?
There are lots of little problems I'm interested in for their own sake, but perhaps the meta-problems are of more interest here. Those would include getting people to accept that we can actually solve problems and that we should try our best to do so. Many scholars seem to have this fatalistic view of the humanities as doomed to walk in circles and never really settle anything. And for good reason - if someone manages to establish "p" then all the nice speculation based on assuming "not p" is worthless. But many would prefer to be as free as possible to speculate about as much as possible.
...Do you have
Sure, what the heck. Ask me stuff.
Professional stuff: I work in tech, but I've never worked as a developer — I have fifteen years of experience as a sysadmin and site reliability engineer. I seem to be unusually good at troubleshooting systems problems — which leaves me in the somewhat unfortunate position of being most satisfied with my job when all the shit is fucked up, which does not happen often. I've used about a dozen computer languages; these days I code mostly in Python and Go; for fun I occasionally try to learn more Haskell. I've occasionally tr...
I am asking everybody here.
Do you have a plan of your own, to ignite the Singularity, the Intelligence explosion, or whatever you want to call it?
If so, when?
How?
I'm a 30-year-old first-year medical student on a full tuition scholarship. I was a super-forecaster in the Good Judgment Project. I plan to donate a kidney in June. I'm a married polyamorous woman.
Why not.
I attended CFAR's may 2013 workshop. I was the main organizer of the London LW group during approximately Nov 2012-April 2013, and am still an occasional organizer of it. I have an undergraduate MMath. My day job is software, I'm the only fulltime programmer on a team at Universal Pictures which is attempting to model the box office. AMAA.
I wrote a book about a new philosophy of empirical science based on large scale lossless data compression. I use the word "comperical" to express the idea of using the compression principle to guide an empirical inquiry. Though I developed the philosophy while thinking about computer vision (in particular the chronic, disastrous problems of evaluation in that field), I realized that it could also be applied to text. The resulting research program, which I call comperical linguistics, is something of a hybrid of linguistics and natural language processing, but (I believe) on much firmer methodological ground than either. I am now carrying out research in this area, AMA.
Are there interesting reasons that some LW regulars feel disdain for RationalWiki, besides RW's unflattering opinion of LW/EY? Can you steelman that disdain into a short description of what's wrong with RW, from their point of view? (I'm asking as someone basically unfamiliar with RW).
I think the main reason is that basically nobody in the wider world talks about LW, and RW is the only place that talks about LW even that much. And RW can't reasonably be called very interested in LW either (though many RW regulars find LW annoying when it comes to their attention). Also, we use the word "rational", which LW thinks of as its own - I think that's a big factor.
From my own perspective: RW has many problems. The name is a historical accident (and SkepticWiki.com/org is in the hands of a domainer). Mostly it hasn't enough people who can actually write. It's literally not run by anyone (same way Wikipedia isn't), so is not going to be fixed other than organically. Its good stuff is excellent and informative, but a lot of it isn't quite fit for referring outside fresh readers to.
It surprises me how popular it is (as in, I keep tripping over people using a particular page they like - Alexa 21,000 worldwide, 8800 US - and Snopes uses us a bit) - it turns out there's demand for something that can set out "no, actually, that's BS and here's why, point for point". Raising the sanity waterline does in fact also involve dredging the swamps and cleaning up to...
I'll answer anything that will not affect negatively my academic career or violates anyone's privacy but mine (I never felt like I had one). I waive my right not to answer anything else that could be useful to anyone. I'm finishing a master’s on ethics of human enhancement in Brazil, and have just submitted an application for a doctorate in Oxford about moral enhancement.
Why did you make this post Will? Wait I guess you didn't comment here volunteering to answer questions.
Anyway I guess I can answer questions but I'm pretty lazy and not very educated so ask at your own risk.
Self-deprecating observations about my knowledge and interestingness, etc., but I have been reading this site for a while. So on the off chance anyone's curious, sure, why not: ask me anything.
Sure. I run a Software Dev Shop called Purple Bit, based in Tel Aviv. We specialise in building Python/Angular.js webapps, and have done consulting for a bunch of different companies, from startups to large businesses.
I'm very interested in business, especially Startups and Product Development. Many of my closest friends are running startups, I used to run a startup, and I work with and advise various startups, both technically and business-wise.
AMA, although I won't/can't necessarily answer everything.
I work as a software engineer, married with two kids, live in Israel and blog mostly in Russian. AMA.
Well, it is sometimes difficult to be me, but I'm not sure how much of that is caused by being smart, how much by lack of some skills, and how much is simply the standard difficulty of human life. :D
Seems to me that most people around me don't care about truth or rationality. Usually they just don't comment on things outside of their kitchens; unless they are parroting some opinion from a newspaper or a TV. That's actually the less annoying part; I am not disappointed because I didn't expect more from them. More annoying are people who try to appear smart and do so basically by optimizing for signalling: they repeat every conspiracy theory, share on facebook every "amazing" story without bothering to google for hoaxes or just use some basic common sense. When I am at Mensa and listen to people discussing some latest conspiracy theory, I feel like I might strangle them. Especially when they start throwing around some fully general arguments, such as: You can't actually know anything. They use their intelligence to defeat themselves. Also, I hate religion. That's a poison of the mind; an emotional electric fence in a mind that otherwise might have a chance to become sane. -- B...
In the unlikely event that anyone is interested, sure, ask me anything.
Edit: Ethics are a particular interest of mine.
Depends on the situation. Do I have to kill whatever I'm fighting, or do I just have to defend myself? If it's the former, the horse-sized duck, because duck-sized horses would be too good at running away and hiding. If it's the latter, then the duck-sized horses, because they'd be easier to scatter.
Hello : )
I chatted with Satoshi (he really used his famous nickname in a chat with me) before there was even Bitcoin. So I just discovered your site with your articles and have to say that they are impressive, or seem to be (I don't have much clue about these things).
Greetings Anita
I heard something about the B-money and Bitcoin. I'd like to hear your views on the technical aspects of a real decentralized cryptocurrency. I hope it may fulfill all the promises.
See the article at
or https://saintthor.medium.com/
If you can read Chinese, the Chinese version is most recommended at http://guideep.atwebpages.com/acc.html
thanks.
I have been trying to find the origin of the term "b-money". What does the "b" refer to in Wei Dai's post (http://www.weidai.com/bmoney.txt)? The term "b-money" first appears in the Appendix title. Does "b" denote the second method (with "a-money" referring to the first method)? Or was the term b-money already in use, referring to digital money, i.e. b[it]-money?
Any info appreciated.
Assuming the security risk of growing economic monopolization built into the DNA of proof of work (as well as proof of stake) is going to prevail in the coming years:
Do you think it is possible to create a more secure proof of democratic stake? I know that would first require a not-yet-existing proof of unique identity. So the question also implies: Do you think a proof of unique identity is even possible?
P.S.: Ideas floating around the web to solve the latter challenge include, for example:
I'm a 31-year-old Colombian guy who writes SF in Spanish. I'm a lactovegetarian teetotaler who sympathizes with Theravada Buddhism. My current job is as chief editor at a small publishing house that produces medical literature. My estimate of the existence of one other LWer near my current location (the 8-million-inhabitant city of Bogotá) is 0.01% per every ten kilometers in the radius of search for the first 2500 kilometers of radius (after that distance you hit the U.S., which invalidates this formula). My mother was an angrily devout Catholic and my father was a hopelessly gullible Rosicrucian. Ask me anything not based on stereotypes about Colombians.
I've gotten accustomed to hearing cryonics described here as the obvious thing to do at the end of your natural life, the underlying assumption apparently being that you'd be hopelessly dumb not to jump at the chance of a tremendous potential benefit at a comparatively negligible cost.
So, I have a calibration question for male LWers who come from Jewish families: What is your opinion on foreskin restoration surgery?
I'm an Australian male with strong views on Socialism. I have an interest in modern history and keeping up with international news.
Sure, what the hell. I'm a financial advisor by trade, so ask me questions in that field if you want expert-type answers, but being opinionated and argumentative is my hobby, so ask me anything.
I've worked in high frequency trading in Chicago as a trader and developer for 11.5 years. I am an expert on that stuff. AMA.
I am K. Woomba. I'll answer any question so long as it contains an even number of "a"'s XOR is a question I decide to answer. Also, please no questions about why I skipped work today.
(Nice to see some less active old-timers active in this thread again.)
You can ask me something. I don't promise to answer. If you've never heard of me and want to ask me something anyway, here's some hooks:
I have many opinions on how humans interact with computers and how computers interact with computers; i.e. user interface design, programming language design, networking, and security.
I consider myself to have an akrasia problem but am reasonably successful* in life despite it, for causes which appear to me to be luck or other people's low standards.
* To be more precise, I have money (but man...
Hi Wei. Do you have any comments on Ethereum, ICOs (Initial Coin Offerings), and the hard forks of Bitcoin? Do you think they will solve the problem of Bitcoin's fixed monetary supply, since they have somehow brought in much more "money" (or securities like stock; I'm not sure how to classify them)?
Do you have any comments about the Bitcoin scaling fight between larger blocks and second-layer payment channels such as the Lightning Network?
I'd like to answer questions about how AI might represent itself as a LW user, or vice-versa.
Ask me anything. Until recently I was a machine learning and text mining graduate student. Now I work on data visualization in a corporate setting (but I can't talk a ton about that).
I'm a computer science researcher, working in systems and software engineering research. I'm particularly qualified to talk about the experience of academic computer science, but AMA.
I sometimes speak English fluently, possess a high school diploma, and live in the great United States of America. If you ask a question, I may answer.
I've read the Sequences and have a pretty solid grip on the LW orthodox position on epistemology and a number of other issues. Anyone need clarification on any points?
If you want people to ask you stuff reply to this post with a comment to that effect.
More accurately, ask any participating LessWronger anything that is in the category of questions they indicate they would answer.
If you want to talk about this post you can reply to my comment below that says "Discussion of this post goes here.", or not.