
Comment author: Duncan_Sabien 05 December 2016 07:20:02PM 0 points [-]

A general strategy of "can I completely reverse my current claim and have it still make sense?" is a good one for this. When you're talking about big, vague concepts, you can usually just flip them over and they still sound like reasonable opinions/positions to take. When you flip it and it seems like nonsense, or seems provably, specifically wrong, that means you're into concrete territory. Try just ... adopting a strategy of doing this 3-5 times per long conversation?

Comment author: negamuhia 10 December 2016 10:20:19PM 0 points [-]

This seems useful and simple enough to try. I'll set up an implementation intention to do this next time I find myself in a long conversation. It also reminds me of the reversal test, a heuristic for eliminating status-quo bias.

Bostrom, N. & Ord, T. (2006). "The Reversal Test: Eliminating Status Quo Bias in Applied Ethics." Ethics 116(4).

Comment author: negamuhia 03 December 2016 02:12:32PM 2 points [-]

Does anyone else have trouble noticing when the discussion they're having is getting more abstract? I'm often reminded of this when debating some topic. This relates to the point on "Narrowing the scope", and how to notice the need to do it.

Comment author: negamuhia 11 April 2016 02:33:39PM 0 points [-]

I signed up for a CFAR workshop, and got a scholarship, but couldn't travel for financial reasons. Is there a way to get travel assistance for either WAISS or the MIRI Fellowship program? I'll just apply for both.

Comment author: negamuhia 11 April 2016 02:31:10PM 1 point [-]

What reaches your attention when you see is not ‘reality’ but a mix of light measurements with cryptotheories that were useful for making snap judgments in the environment of ancestral adaptation.

Eric S. Raymond here: http://esr.ibiblio.org/?p=7076

Meetup : Rationality Nairobi mini-Meetup #1: Double Crux

0 negamuhia 05 April 2016 12:40PM

Discussion article for the meetup : Rationality Nairobi mini-Meetup #1: Double Crux

WHEN: 30 April 2016 03:34:07PM (+0300)

WHERE: Up the hill from Lukenya Academy, Machakos

We'll be learning and investigating the dynamics of the game 'Double Crux', a potentially useful tool for approximating Aumannian reasoning. We'll play a few rounds, and I (lesswrong.com/user/negamuhia) will write up what happened, along with my impressions of the difficulty, interest, and progression of the members who participate.


Comment author: Sherincall 12 August 2015 10:48:29AM 7 points [-]

A botnet startup. People sign up for the service, and install an open source program on their computer. The program can:

  • Use their CPU cycles to perform arbitrary calculations.
  • Use their network bandwidth to relay arbitrary data.
  • Let the user add restrictions on when/how much it can do the above.

For every quantum of data transferred / calculated, the user earns a token. These tokens can then be used to buy bandwidth/cycles of other users on the network. You can also buy tokens for real money (including crypto-currency).

Any job that you choose to execute on the other users' machines has to be somehow verified safe for those users (maybe the users have to be able to see the source before accepting, maybe the company has to authorize it, etc.). The company also offers a package of common tasks you can use, such as DDoS, Tor/VPN relays, seedboxes, cryptocurrency mining and brute-forcing hashes/encryption/etc.
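A minimal sketch of the token accounting described above, assuming a central ledger run by the company (illustrative only; the Ledger class and its credit/spend methods are names invented for this example, not any real service's API):

    # Hypothetical token ledger for the scheme above (illustrative sketch).
    # One token is credited per quantum of CPU time or bandwidth a member
    # contributes, and tokens are debited when that member runs jobs on
    # other members' machines.

    class Ledger:
        def __init__(self):
            self.balances = {}  # user id -> token balance

        def credit(self, user, quanta):
            """Credit one token per quantum of work contributed to the network."""
            self.balances[user] = self.balances.get(user, 0) + quanta

        def spend(self, user, quanta):
            """Debit tokens when the user consumes other members' resources."""
            if self.balances.get(user, 0) < quanta:
                raise ValueError("insufficient tokens")
            self.balances[user] -= quanta

    ledger = Ledger()
    ledger.credit("alice", 10)       # Alice relayed 10 quanta of data for others
    ledger.spend("alice", 3)         # Alice runs a 3-quantum job on the network
    print(ledger.balances["alice"])  # 7

Tokens bought for real money would just be another credit entry; the harder design question is the one noted above, how the company verifies that a submitted job is safe before members' machines will run it.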

Comment author: negamuhia 15 August 2015 01:04:49PM 1 point [-]

Ethereum does this

Comment author: John_Maxwell_IV 22 May 2015 04:54:38AM *  38 points [-]

Thanks for sharing your contrarian views, both with this post and with your previous posts. Part of me is disappointed that you didn't write more... it feels like you have several posts' worth of objections to Less Wrong here, and at times you are just vaguely gesturing towards a larger body of objections you have towards some popular LW position. I wouldn't mind seeing those objections fleshed out into long, well-researched posts. Of course you aren't obliged to put in the time & effort to write more posts, but it might be worth your time to fix specific flaws you see in the LW community, given that it consists of many smart people interested in maximizing their positive impact on the far future.

I'll preface this by stating some points of general agreement:

  • I haven't bothered to read the quantum physics sequence (I figure if I want to take the time to learn that topic, I'll learn from someone who researches it full-time).

  • I'm annoyed by the fact that the sequences in practice seem to constitute a relatively static document that doesn't get updated in response to critiques people have written up. I think it's worth taking them with a grain of salt for that reason. (I'm also annoyed by the fact that they are extremely wordy and mostly without citation. Given the choice of getting LWers to either read the sequences or read Thinking, Fast and Slow, I would prefer they read the latter; it's a fantastic book, and thoroughly backed up by citations. No intellectually serious person should go without reading it IMO, and it's definitely a better return on time. Caveat: I personally haven't read the sequences through and through, although I've read lots of individual posts, some of which were quite insightful. Also, there is surprisingly little overlap between the two works, and it's likely worthwhile to read both.)

And here are some points of disagreement :P

You talk about how Less Wrong encourages the mistake of reasoning by analogy. I searched for "site:lesswrong.com reasoning by analogy" on Google and came up with these 4 posts: 1, 2, 3, 4. Posts 1, 2, and 4 argue against reasoning by analogy, while post 3 claims the situation is a bit more nuanced. In this comment here, I argue that reasoning by analogy is a bit like taking the outside view: analogous phenomena can be considered part of the same (weak) reference class. So...

  • Insofar as there is an explicit "LW consensus" about whether reasoning by analogy is a good idea, it seems like you've diagnosed it incorrectly (although maybe there are implicit cultural norms that go against professed best practices).

  • It seems useful to know the answer to questions like "how valuable are analogies", and the discussions I linked to above seem like discussions that might help you answer that question. These discussions are on LW.

  • Finally, it seems you've been unable to escape a certain amount of reasoning by analogy in your post. You state that experimental investigation of asteroid impacts was useful, so by analogy, experimental investigation of AI risks should be useful.

The steelman of this argument would be something like "experimentally, we find that investigators who take experimental approaches tend to do better than those who take theoretical approaches". But first, this isn't obviously true... mathematicians, for instance, have found theoretical approaches to be more powerful. (I'd guess that the developer of Bitcoin took a theoretical rather than an empirical approach to creating a secure cryptocurrency.) And second, I'd say that even this argument is analogy-like in its structure, since the reference class of "people investigating things" seems sufficiently weak to start pushing into analogy territory. See my above point about how reasoning by analogy at its best is reasoning from a weak reference class. (Do people think this is worth a toplevel post?)

This brings me to what I think is my most fundamental point of disagreement with you. Viewed from a distance, your argument goes something like "Philosophy is a waste of time! Resolve your disagreements experimentally! There's no need for all this theorizing!" And my rejoinder would be: resolving disagreements experimentally is great... when it's possible. We'd love to do a randomized controlled trial of whether universes with a Machine Intelligence Research Institute are more likely to have a positive singularity, but unfortunately we don't currently know how to do that.

There are a few issues with too much emphasis on experimentation over theory. The first issue is that you may be tempted to prefer experimentation over theory even for problems that theory is better suited for (e.g. empirically testing prime number conjectures). The second issue is that you may fall prey to the streetlight effect and prioritize areas of investigation that look tractable from an experimental point of view, ignoring questions that are both very important and not very tractable experimentally.
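As a toy illustration of that first issue (my own example, not from the parent comment): empirically spot-checking a prime-number conjecture such as Goldbach's can only ever confirm finitely many cases, so no amount of this kind of testing settles the general claim, which is exactly where theory is needed.

    # Spot-check Goldbach's conjecture ("every even integer > 2 is the sum of
    # two primes") for small even numbers. Passing the check below proves
    # nothing about the general conjecture -- that requires theory.

    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def goldbach_holds(n):
        """Return True if the even number n can be written as a sum of two primes."""
        return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

    print(all(goldbach_holds(n) for n in range(4, 10000, 2)))  # True, but not a proof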

You write:

Well, much of our uncertainty about the actions of an unfriendly AI could be resolved if we were to know more about how such agents construct their thought models, and relatedly what language were used to construct their goal systems.

This would seem to depend on the specifics of the agent in question, but it's a potentially interesting line of inquiry. My impression is that MIRI thinks most possible AGI architectures wouldn't meet its standards for safety, so given how safety-constrained its ideal architecture is, it's focused on developing the safety stuff first before working on constructing thought models etc. That seems like a pretty reasonable approach for an organization with limited resources, if it is in fact MIRI's approach. But I could believe that value could be added by looking at lots of budding AGI architectures and trying to figure out how one might make them safer on the margin.

We could also stand to benefit from knowing more practical information (experimental data) about in what ways AI boxing works and in what ways it does not, and how much that is dependent on the structure of the AI itself.

Sure... but note that Eliezer Yudkowsky of MIRI was the one who invented the AI box experiment and ran the first few experiments, and FHI wrote this paper collecting ideas for what an AI box might consist of. (The other weakness of empiricism I didn't mention is that empiricism doesn't tell you which hypotheses might be useful to test. Knowing which hypotheses to test is especially valuable when testing hypotheses is expensive.)

I could believe that there are fruitful lines of experimental inquiry that are neglected in the AI safety space. Overall it looks kinda like crypto to me in the sense that theoretical investigation seems more likely to pan out. But I'm supportive of people thinking hard about specific useful experiments that someone could run. (You could survey all the claims in Bostrom's Superintelligence and try to estimate what fraction could be cheaply tested experimentally. Remember that just because a claim can't be tested experimentally doesn't mean it's not an important claim worth thinking about...)

Comment author: negamuhia 14 August 2015 11:55:54AM *  0 points [-]

See my above point about how reasoning by analogy at its best is reasoning from a weak reference class. (Do people think this is worth a toplevel post?)

Yes, I do. Intuitively, this seems correct. But I'd still like to see you expound on the idea.

Comment author: dxu 20 April 2015 12:19:27AM *  15 points [-]

Has anyone here ever had the "location" of their sense of self change? I ask because I've recently read that while some people feel like "they" are located in their heads, others feel like "they" are in their chests, or even feet. Furthermore, apparently some people actually "shift around", in that sometimes they feel like their sense of self is in one body part, and then it's somewhere else.

I find this really interesting because I have never had such an experience myself; I'm always "in my head", so to speak--more precisely, I feel as though "I" am located specifically at a point slightly behind my eyes. The obvious hypothesis is that my visual sense is the sense that conveys the most information (aside from touch, which isn't pinned down to a specific location), which is why I identify with it most, but the sensation of being "in my head" persists even when I have my eyes closed, which somewhat contradicts that hypothesis. Also, the fact that some people apparently don't perceive themselves in that place is more weak evidence against that hypothesis.

So, any thoughts/stories/anecdotes?

Comment author: negamuhia 24 April 2015 11:56:46AM -1 points [-]

If you practice mindfulness meditation, you'll realize that your sense of self is an illusion. It's probably true that most people believe that their "self" is located in their head, but if you investigate it yourself, you'll find that there's actually no "self" at all.

Comment author: Chaeris 14 February 2015 04:54:30PM 2 points [-]

Hello, this is my first post on this website; I am currently sixteen. To help me get a better grasp of what this site is about, I would like someone to point me to recent posts that you consider "important" (though I know this is a subjective judgment). Since you wrote that you wish you had known about Less Wrong when you were 15/16, I think you were (perhaps unconsciously) thinking of several particular things you've seen, and looking at them could help me.

Comment author: negamuhia 15 February 2015 09:54:30PM 0 points [-]

The core ideas in LW come from the Major Sequences. You can start there, reading posts in each sequence sequentially.

Comment author: KatjaGrace 16 September 2014 01:21:05AM 3 points [-]

Have you seen any demonstrations of AI which made a big impact on your expectations, or were particularly impressive?

Comment author: negamuhia 16 September 2014 12:28:28PM 2 points [-]

Sergey Levine's research on guided policy search (using techniques such as hidden Markov models to animate, in real time, the movement of a bipedal or quadrupedal character). An example:

Sergey Levine, Jovan Popović. Physically Plausible Simulation for Character Animation. SCA 2012: http://www.eecs.berkeley.edu/~svlevine/papers/quasiphysical.pdf
