What reaches your attention when you see is not ‘reality’ but a mix of light measurements with cryptotheories that were useful for making snap judgments in the environment of ancestral adaptation.
Eric S. Raymond here: http://esr.ibiblio.org/?p=7076
We'll be learning and investigating the dynamics of the game 'Double Crux', a potentially useful tool for approximating Aumannian reasoning. We'll play a few rounds, and I (lesswrong.com/user/negamuhia) will describe what happened, along with my impressions of difficulty, interest, and progression based on the members who participate.
A botnet startup. People sign up for the service, and install an open source program on their computer. The program can:
For every quantum of data transferred or computed, the user earns a token. These tokens can then be used to buy bandwidth/cycles from other users on the network. You can also buy tokens for real money (including cryptocurrency).
Any job that you choose to execute on other users' machines has to be somehow verified safe for those users (maybe the users have to be able to see the source before accepting, maybe the company has to authorize it, etc.). The company also offers a package of common tasks you can use, such as DDoS, Tor/VPN relays, seedboxes, cryptocurrency mining, and brute-forcing hashes/encryption/etc.
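The token accounting described above could be sketched as a simple ledger: contribute a unit of bandwidth or compute, earn a token; spend tokens to run jobs on someone else's machine. A minimal sketch in Python (all class and method names here are illustrative, not from any real service):

```python
# Hypothetical token ledger for the compute-sharing idea above.
# One token per quantum of data transferred/computed; tokens are
# spent to buy other users' bandwidth/cycles, or bought for money.

class TokenLedger:
    def __init__(self):
        self.balances = {}  # user id -> token balance

    def credit_contribution(self, user, units):
        """One token per unit of bandwidth/compute contributed."""
        self.balances[user] = self.balances.get(user, 0) + units

    def spend(self, buyer, seller, tokens):
        """Buyer pays tokens to use the seller's machine."""
        if self.balances.get(buyer, 0) < tokens:
            raise ValueError("insufficient tokens")
        self.balances[buyer] -= tokens
        self.balances[seller] = self.balances.get(seller, 0) + tokens

    def purchase(self, user, tokens):
        """Tokens bought for real money are simply credited."""
        self.balances[user] = self.balances.get(user, 0) + tokens
```

The interesting design problems (verifying that jobs are safe, preventing users from minting fake contribution credits) live outside this ledger, which is exactly where the hard part of the startup would be.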
Thanks for sharing your contrarian views, both with this post and with your previous posts. Part of me is disappointed that you didn't write more... it feels like you have several posts' worth of objections to Less Wrong here, and at times you are just vaguely gesturing towards a larger body of objections you have to some popular LW position. I wouldn't mind seeing those objections fleshed out into long, well-researched posts. Of course you aren't obliged to put in the time & effort to write more posts, but it might be worth your time to fix specific flaws you see in the LW community, given that it consists of many smart people interested in maximizing their positive impact on the far future.
I'll preface this by stating some points of general agreement:
I haven't bothered to read the quantum physics sequence (I figure if I want to take the time to learn that topic, I'll learn from someone who researches it full-time).
I'm annoyed by the fact that the sequences in practice seem to constitute a relatively static document that doesn't get updated in response to critiques people have written up. I think it's worth reading them with a grain of salt for that reason. (I'm also annoyed by the fact that they are extremely wordy and mostly without citation. Given the choice of getting LWers to either read the sequences or read Thinking Fast and Slow, I would prefer they read the latter; it's a fantastic book, and thoroughly backed up by citations. No intellectually serious person should go without reading it IMO, and it's definitely a better return on time. Caveat: I personally haven't read the sequences through and through, although I've read lots of individual posts, some of which were quite insightful. Also, there is surprisingly little overlap between the two works and it's likely worthwhile to read both.)
And here are some points of disagreement :P
You talk about how Less Wrong encourages the mistake of reasoning by analogy. I searched for "site:lesswrong.com reasoning by analogy" on Google and came up with these 4 posts: 1, 2, 3, 4. Posts 1, 2, and 4 argue against reasoning by analogy, while post 3 claims the situation is a bit more nuanced. In this comment here, I argue that reasoning by analogy is a bit like taking the outside view: analogous phenomena can be considered part of the same (weak) reference class. So...
Insofar as there is an explicit "LW consensus" about whether reasoning by analogy is a good idea, it seems like you've diagnosed it incorrectly (although maybe there are implicit cultural norms that go against professed best practices).
It seems useful to know the answer to questions like "how valuable are analogies", and the discussions I linked to above seem like discussions that might help you answer that question. These discussions are on LW.
Finally, it seems you've been unable to escape a certain amount of reasoning by analogy in your post. You state that experimental investigation of asteroid impacts was useful, so by analogy, experimental investigation of AI risks should be useful.
The steelman of this argument would be something like "experimentally, we find that investigators who take experimental approaches tend to do better than those who take theoretical approaches". But first, this isn't obviously true... mathematicians, for instance, have found theoretical approaches to be more powerful. (I'd guess that the developer of Bitcoin took a theoretical rather than an empirical approach to creating a secure cryptocurrency.) And second, I'd say that even this argument is analogy-like in its structure, since the reference class of "people investigating things" seems sufficiently weak to start pushing into analogy territory. See my above point about how reasoning by analogy at its best is reasoning from a weak reference class. (Do people think this is worth a toplevel post?)
This brings me to what I think is my most fundamental point of disagreement with you. Viewed from a distance, your argument goes something like "Philosophy is a waste of time! Resolve your disagreements experimentally! There's no need for all this theorizing!" And my rejoinder would be: resolving disagreements experimentally is great... when it's possible. We'd love to run a randomized controlled trial of whether universes with a Machine Intelligence Research Institute are more likely to have a positive singularity, but unfortunately we don't currently know how to do that.
There are a few issues with too much emphasis on experimentation over theory. The first issue is that you may be tempted to prefer experimentation even for problems that theory is better suited to (e.g. empirically testing prime number conjectures). The second issue is that you may fall prey to the streetlight effect and prioritize areas of investigation that look tractable from an experimental point of view, ignoring questions that are both very important and not very tractable experimentally.
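To make the prime-conjecture example concrete: you can "experimentally" check a conjecture like Goldbach's (every even number greater than 2 is the sum of two primes) for as many cases as you like, and no amount of it constitutes a proof. A minimal sketch of what such empirical testing looks like:

```python
# Empirically checking Goldbach's conjecture (every even n > 2 is the
# sum of two primes) for small n. Any finite check like this is only
# evidence, never proof -- which is why the problem calls for theory.

def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_holds(n):
    """True if even n > 2 can be written as a sum of two primes."""
    return any(is_prime(p) and is_prime(n - p)
               for p in range(2, n // 2 + 1))

# Check every even number up to a bound.
assert all(goldbach_holds(n) for n in range(4, 1000, 2))
```

The loop above can confirm the conjecture out to any bound you have patience for, but the remaining infinitude of cases is untouched; that asymmetry is exactly the first issue with over-weighting experimentation.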
You write:
Well, much of our uncertainty about the actions of an unfriendly AI could be resolved if we knew more about how such agents construct their thought models, and relatedly what languages are used to construct their goal systems.
This would seem to depend on the specifics of the agent in question. This seems like a potentially interesting line of inquiry. My impression is that MIRI thinks most possible AGI architectures wouldn't meet its standards for safety, so given that their ideal architecture is so safety-constrained, they're focused on developing the safety stuff first before working on constructing thought models etc. This seems like a pretty reasonable approach for an organization with limited resources, if it is in fact MIRI's approach. But I could believe that value could be added by looking at lots of budding AGI architectures and trying to figure out how one might make them safer on the margin.
We could also stand to benefit from knowing more practical information (experimental data) about in what ways AI boxing works and in what ways it does not, and how much that is dependent on the structure of the AI itself.
Sure... but note that Eliezer Yudkowsky from MIRI was the one who invented the AI box experiment and ran the first few experiments, and FHI wrote this paper collecting a bunch of ideas for how AI boxes might be designed. (The other weakness of empiricism I didn't mention is that empiricism doesn't tell you which hypotheses might be useful to test. Knowing which hypotheses to test is especially valuable when testing them is expensive.)
I could believe that there are fruitful lines of experimental inquiry that are neglected in the AI safety space. Overall it looks kinda like crypto to me in the sense that theoretical investigation seems more likely to pan out. But I'm supportive of people thinking hard about specific useful experiments that someone could run. (You could survey all the claims in Bostrom's Superintelligence and try to estimate what fraction could be cheaply tested experimentally. Remember that just because a claim can't be tested experimentally doesn't mean it's not an important claim worth thinking about...)
See my above point about how reasoning by analogy at its best is reasoning from a weak reference class. (Do people think this is worth a toplevel post?)
Yes, I do. Intuitively, this seems correct. But I'd still like to see you expound on the idea.
Has anyone here ever had the "location" of their sense of self change? I ask because I've recently read that while some people feel like "they" are located in their heads, others feel like "they" are in their chests, or even feet. Furthermore, apparently some people actually "shift around", in that sometimes they feel like their sense of self is in one body part, and then it's somewhere else.
I find this really interesting because I have never had such an experience myself; I'm always "in my head", so to speak--more precisely, I feel as though "I" am located specifically at a point slightly behind my eyes. The obvious hypothesis is that my visual sense is the sense that conveys the most information (aside from touch, which isn't pinned down to a specific location), which is why I identify with it most, but the sensation of being "in my head" persists even when I have my eyes closed, which somewhat contradicts that hypothesis. Also, the fact that some people apparently don't perceive themselves in that place is more weak evidence against that hypothesis.
So, any thoughts/stories/anecdotes?
If you practice mindfulness meditation, you'll realize that your sense of self is an illusion. It's probably true that most people believe that their "self" is located in their head, but if you investigate it yourself, you'll find that there's actually no "self" at all.
Hello, this is my first post on this website; I am currently sixteen. To help me get a better sense of what this website is about, I would like someone to point me to recent posts that you consider "important" (this is always somewhat subjective, I think). Since you wrote that you wish you had known about Less Wrong when you were 15/16, I think you had several particular posts in mind, and reading them could help me.
The core ideas in LW come from the Major Sequences. You can start there, reading posts in each sequence sequentially.
Have you seen any demonstrations of AI which made a big impact on your expectations, or were particularly impressive?
Sergey Levine's research on guided policy search (using techniques such as hidden Markov models to animate, in real time, the movement of a bipedal or quadrupedal character). An example:
Sergey Levine, Jovan Popović. Physically Plausible Simulation for Character Animation. SCA 2012: http://www.eecs.berkeley.edu/~svlevine/papers/quasiphysical.pdf
How would you like this reading group to be different in future weeks?
The text of [the parts I've read so far of] Superintelligence is really insightful, but I'll quote Nick in saying that
"Many points in this book are probably wrong".
He gives many references (84 in Chapter 1 alone), some of which point to papers and others to side discussions that extend the specific idea in question but don't fit directly into the book's narrative. My suggestion would be to go through each reference as it comes up in the book, analyze and discuss it, then continue. We might even form little discussion groups around each reference in a section (if it's a paper). It could even happen right here in the comment threads.
That way, we can get as close to Bostrom's original world of information as possible, maybe drawing different conclusions. I think that would be a more consilient understanding of the book.
It’s tempting to think of technical audiences and general audiences as completely different, but I think that no matter who you’re talking to, the principles of explaining things clearly are the same. The only real difference is which things you can assume they already know, and in that sense, the difference between physicists and the general public isn’t necessarily more significant than the difference between physicists and biologists, or biologists and geologists.
Reminds me of Expecting Short Inferential Distances.
I signed up for a CFAR workshop, and got a scholarship, but couldn't travel for financial reasons. Is there a way to get travel assistance for either WAISS or the MIRI Fellowship program? I'll just apply for both.