All of Jordan's Comments + Replies

Jordan00

Ah, I should have guessed that 'Immersion Learning' had been co-opted a few times before. My above use is my own coinage. By it I just mean jumping in and being exposed to everything you can and letting your brain sort it out, rather than methodically building a cathedral of understanding, one block at a time.

Jordan10

This is how I prefer to learn as well. I call it "Immersion Learning".

For example, during my first year of Algebra, I carried a Calculus textbook with me to class, and read whenever I was bored. I read through the whole textbook that semester, and understood maybe 20%. I didn't bother doing any problems, and when I tried I was totally incapable, but that was OK. The next semester I read through a Calc II and Calc III textbook. Afterward I decided I was going to take the AP Calculus exam. I bought a prep book and started doing calculus problems fo... (read more)

0AshwinV
A question: Is 'Immersion Learning' a term that you have coined? If not, does this have anything to do with Luis Von Ahn's immersion concept on duolingo?
Jordan00

I do this as well, but I don't "lie" (from the perspective of my core values).

I empathetically accept the other person's ethics and decisions. I allow that common connection to genuinely color my tone and physical expressions, which seems to build rapport just as well as actually verbalizing agreement. When I find myself about to verbalize agreement of something I don't actually believe, I consciously pull back. The trick is being able to pull back without losing your empathetic connection.

Anecdotally, I find that I can verbalize disagreement, bu... (read more)

Jordan00

Awesome, thanks for the detailed response. After reading about CRISPR's natural role in bacteria I was curious if it would have targeting limitations. It sounds like it does (needs GGG triplet), but that in practice this isn't a big deal.

You still need to get this system into a cell -- that's an issue as always, I agree -- but the reduced chance of unwanted mutation seems like a big step forward over retroviruses.

Thanks again for the great write-up!

Jordan00

I'm very curious how many genes can be targeted usefully. One paper succeeded in targeting 5 simultaneously in a mouse model. Given the purported accuracy that is already game changing, but if we can do 100 or 200 then maybe we can do more than merely eliminate some simple single gene disorders.

2Douglas_Knight
100 is much smaller than genetic load.
Jordan40

Scary enough for ya?

Sufficiently scary, yes.

That is equivalent to saying you can't understand how mathematics could be a construct; or how mathematical anti-realism could possibly be true.

I assign a respectable probability to anti-realism, and hold no disrespect for anyone who is an anti-realist, but I don't understand how anti-realism can be true. I've never heard a plausible model for why one thing should exist but not another. Tegmarkism sweeps away that problem, leaving the new problem of how to measure probability (why do we have the subjective... (read more)

Jordan140

I had a similar experience the first time I supplemented magnesium. Long-lasting, non-jittery energy spike. I felt stronger (and empirically could in fact lift more weight), felt better, and was extremely happy. The effect decreased the next few times. After 4 doses (of 50% RDA, spread out over 2 weeks) I began to have adverse effects, including heart palpitations, weakness, and a "sense of impending doom".

I wonder if there is a general physiological response to a sudden swing in electrolyte balance that causes the positive effect, rather than the removal of a deficiency.

3A1987dM
Is there a typo/something I'm missing, or who the hell set the RDA that high?
Jordan10

If you wipe out the chemical gradient information then how do you know what sorts of ways that the dendrites should regrow in the weeks and months post-resuscitation?

If I wake up and I feel like myself on a second to second basis, I will not be upset if my path through mind space is drastically altered on a time scale of weeks and months, so long as it doesn't lead me to insanity. Hell, I hope I'll be able to drastically change my mind on that time scale anyway once I'm uploaded.

Jordan30

if a problem doesn't appear quickly, then it probably isn't that important...

I agree completely, especially about how close we probably are to a successful Biosphere, but just to throw out an example where this is wrong: vitamin B-12 deficiency usually takes a decade to manifest symptoms, and is fatal.

Jordan20

It is dangerous in the same way as bringing John Q. Snodgrass to trial for murder. We might overweight evidence in favor of the hypothesis.

Human intuition is a valuable heuristic. As a mathematician I constantly entertain hypotheses I don't believe to be true, for the simple reason that my intuition presented them to be considered. I don't believe I would be at all effective otherwise (although I did just now entertain the hypothesis, despite my lack of belief!).

Jordan00

firstly, a lot of aspects would not necessarily scale up to a smarter system, and it's sometimes hard to tell what generalizes and what doesn't.

I agree, but trying to solve the problem without any hands-on knowledge is certainly more difficult.

Secondly, it's very very hard to pinpoint the "intelligence" of a program without running it

I agree, there is a risk that the first AGI we build will be intelligent enough to skillfully manipulate us. I think the chances are quite small. I find it difficult to imagine skipping dog level intelligence a... (read more)

Jordan60

I agree with Allen and Wallach here. We don't know what an AGI is going to look like. Maybe the idea of a utility maximizer is unfeasible, and the AGIs we are capable of building end up operating in a fundamentally different way (more like a human brain, perhaps). Maybe morality compatible with our own desires can only exist in a fuzzy form at a very high level of abstraction, effectively precluding mathematically precise statements about its behavior (like in a human brain).

These possibilities don't seem trivial to me, and would undermine results from fri... (read more)

3Nymogenous
The problem there is twofold; firstly, a lot of aspects would not necessarily scale up to a smarter system, and it's sometimes hard to tell what generalizes and what doesn't. Secondly, it's very very hard to pinpoint the "intelligence" of a program without running it; if we make one too smart it may be smart/nasty enough to feed us misleading data so that our final AI will not share moral values with humans. It's what I'd do if some aliens tried to dissect my mind to force their morality on humanity.
Jordan20

Very interesting. It appears my own model of the brain included a false dichotomy.

If modules are not genetically hardwired, but rather develop as they adapt to specific stimuli, then we should expect infants to have more homogeneous brains. Is that the case?

0Kaj_Sotala
I would presume so, but I haven't read any research on the question.
Jordan20

When I read it I was imagining something tongue-in-cheek, like Pirates of Penzance. Dr. Seuss would have the advantage of great illustrations though.

3dlthomas
It does seem to fit the Major General's Song.
Raemon150

I don't see it as a play, so much as a lengthy Dr. Seuss book.

Jordan20

I'm aware that we've calculated 'c' both by directly measuring the speed of light (to high precision), as well as indirectly via various formulas from relativity (we've directly measured time dilation, for instance, which lets you estimate c), but are the indirect measurements really accurate to parts per million?

Manfred110

Fortunately for me, Wikipedia turned out to provide good citations. In 2007 some clever people managed to measure the c in time dilation to a precision of about one part in 10^8.

Jordan80

If everywhere in physics where we say "the speed of light" we instead say "the cosmic speed limit", and from this experiment we determine that the cosmic speed limit is slightly higher than the speed of light, does that really change physics all that much?

5Manfred
We have measured both to higher accuracies than the deviation here. One way to measure the "cosmic speed limit" is by measuring how things like energy transform when you approach that speed limit, for example, which happens in particle accelerators all day every day.
0jhuffman
Then what would be constraining the travel speed of light in a vacuum?
Jordan50

I was disappointed when I first looked into the C. elegans emulation progress. Now I'm not so sure it's a bad sign. It seems to me that at only 302 neurons the nervous system is probably far from the dominant system of the organism. Even with a perfect emulation of the neurons, it's not clear to me if the resulting model would be meaningful in any way. You would need to model the whole organism, and that seems very hard.

Contrast that with a mammal, where the brain is sophisticated enough to do things independently of feedback from the body, and where we ca... (read more)

1jefftk
"You would need to model the whole organism, and that seems very hard." There are only ~100 muscle cells. People are trying to model the brain-body combination, but that doesn't sound unreasonably hard to me.
5Douglas_Knight
The lobster stomach ganglion, 30 neurons but a ton of synapses, might be better for this, since its input and output are probably cleaner.
Jordan90

(This is exacerbated by the fact that when I'm sleep-deprived, I tend to feel lousy and wanting to doze off through the day, but then in the evening I suddenly start feeling perfectly OK and not wanting to sleep at all.)

I suffer from this as well. It is my totally unsubstantiated theory that this is a stress response. Throughout the whole day your body is tired and telling you to go to sleep, but the Conscious High Command keeps pressing the KEEP-GOING-NO-MATTER-WHAT button until your body decides it must be in a war zone and kicks in with cortisol or adrenaline or whatever.

Jordan30

Hear, hear. I encourage everyone to buddy up with an academic and use that academic's library access to journals.

Jordan20

I propose that the rational act is to investigate approaches to greater than human intelligence which would succeed.

This. I'm flabbergasted this isn't pursued further.

Jordan10

Definitely works better than any supplement or herbal remedy I've tried, but I usually don't feel rested the next day.

Jordan10

Fully agree, especially because I suffer from chronic insomnia =D

0MatthewBaker
Have you tried vaporizing medical sleepy weed? That helped a lot with my insomnia :)
Jordan00

All I'm saying is that people attribute evolutionary reasons to things that have many separate causes and are unproven, because they think they understand it.

I agree; however, reverse stupidity is not intelligence. You say

the behavioral pattern that polyphasic sleep requires isn't evolved into our system its just a natural response to the natural light patterns of our world

but this seems like an unsubstantiated claim, just as much as people claiming sleep must be an evolved behavior. I agree that sleep is at least partially behavioral, but it's uncle... (read more)

0MatthewBaker
You're right, I was being too rhetorical when I said that. The final point of my piece was simply a way of explaining the quote; I can agree that
Jordan30

I would discount polyphasic sleep as being natural on grounds of my current knowledge of anthropology. As far as I know there are no known human cultures that engage in polyphasic sleep (not counting biphasic sleep). That seems like pretty strong evidence that it isn't behavioral but physiological, which in turn suggests (but doesn't guarantee) an evolutionary basis for human sleep patterns. Of course, some amount of human sleep patterns is behavioral, e.g. the siesta.

0Jack
Can anyone confirm that chimps and bonobos are diurnal as well?
4MatthewBaker
Look, our sun forces us into a monophasic pattern because of the day/night cycle that occurs everywhere on Earth, but our bodies don't naturally fall into it. We sleep at night because our brain is wired to sleep when it's dark, and that's an evolved mechanism, but the behavioral pattern that polyphasic sleep requires isn't evolved into our system; it's just a natural response to the natural light patterns of our world. In parts of the world where light comes less often, sleeping patterns are different than near the equator, as evidenced by biphasic sleepers around the world who follow the siesta pattern naturally. All I'm saying is that people attribute evolutionary reasons to things that have many separate causes and are unproven, because they think they understand it. That's what both these quotes illustrate, as far as I know :)
Jordan50

Great post, great review of the literature.

Where do you get most of your references? Do you wade through the literature, or do you use review papers? I'd love to see a book length compilation with the same density as this post.

1Scott Alexander
With one or two exceptions, these were all taken from the link "Verbal Reports on Mental Processes" at the beginning of the post.
Jordan-10

you had better start imagining pretty hard and consider every possible unexpected event of that order of improbability, including black swans

With QS you must guard yourself against all local Everett branches. Those branches could conceivably contain black swans, like a few electrons tunneling out of a circuit preventing a CPU from performing correctly. Even that is a 1-in-1,000,000,000 or rarer event. But they will not contain anything macroscopic.

If I look around and notice no one nearby, I might say "I am only 99% confident that there isn't anyone n... (read more)

2wedrifid
I agree (for the same reasons you specified.) It becomes even more complicated when trying to account for a probability distribution over possible quantum configurations that could lead to your own subjective state. Because culling from the futures of some possible current states makes the other possible 'now's considered to be more relevant there are additional failure modes that become even more likely to be relevant.
Jordan20

If I were to build a death machine it would be based on high explosives. I would encase my head in a mound of C4 clay (or perhaps a less stable material). The machine could fail, most likely at the detonator, but it's difficult to imagine how it could maim me.

wedrifid140

but it's difficult to imagine

I believe that is the point they are trying to illustrate. If you are trying to QS your way to one in several million events then you had better start imagining pretty hard and consider every possible unexpected event of that order of improbability, including black swans. You can influence the relative probability of various failure modes but you must acknowledge that the failure modes become magnified alongside the win outcome.

This consideration is probably not a problem when playing a simple low-n roulette. It becomes insu... (read more)

2Alexei
I agree. I think even with our modern technology we can create a suicide machine that will have a very very high chance of working.
Jordan50

It's difficult for my brain to parse a sentence with 'alieve'. I guess I've watched too many commercials, and my brain associates 'Aleve' with 'relieve', which has an approximately opposite meaning. I have to mentally substitute 'alieve' with something like 'actually believe' in order to comfortably read the sentence.

Jordan110

The borders on comments are fairly ugly, and far too thick. When I go to view all my comments, the way they are listed there is much more aesthetically pleasing.

I like the new header. The footer is a great improvement.

Mixed feelings about the thumbs up/down icons. I like icons, and they are smaller than the text "Vote up" and "Vote down", but they actually end up taking more space than the text, because their vertical height is greater. Perhaps they can be shrunk a bit and placed in the title line of the comment, along with the permalink and reply icon? You could potentially hide all the icons unless you're mousing over the comment, to avoid clutter.

3satt
I refreshed this page, and only the new comments had the thick, lime green border, so it seems to be a way to show which comments got posted since one last viewed the page. Anyway, it looks a bit off to me too; maybe it'd work almost as well if it had the same thickness as the normal, thinner border? The lime green colour would stick out less if the border weren't as thick. I think this is worth a shot. (Also open to Alicorn's suggestion that the icons go back to being words, in which case they should probably stay at the bottom of comments.) Count me as a vote against that idea.
Jordan30

I think an unforeseeable edge case or bug that requires deep refactoring and severely cuts into allotted development time fits the bill for a black swan dead on.

Jordan00

What if the subreddit was an actual reddit subreddit?

Jordan90

That is the heart of the social engineering problem at hand.

Programmers gain status by creating and contributing to open source projects, and by answering questions on StackOverflow, etc. I think that is a stable equilibrium, both for programmers and for academics. The question is how to get to that equilibrium in the first place.

First, I think it needs to become generally accepted that the current equilibrium is broken and that there are alternatives. To that end I encourage all academics to discuss it as openly as possible. Once that happens I think (hope) it will just be a matter of high status individuals throwing their weight around properly.

0DSimon
Seconding the recommendation of following the Open Source model, particularly Stack Overflow. I'm also a big fan of the many OSS-focused IRC channels, where you'll typically be able to find grouchy-but-helpful people to advise you on the fine points of nearly any piece of software.
2Cayenne
An 'open-source science' original-research version of Wikipedia, perhaps? With everything explicitly licensed under an attribution-required copyright? Edit - please disregard this post
Jordan30

Can you comment on what the end goal is for all your scholarship, aside from satisfaction?

lukeprog140

Right now, solving the 'first stage' of metaethics and then making novel progress on CEV. That's where almost every heavily-cited post I've written on LW has been heading.

Jordan220

I lament this state of affairs with the subdued passion of a 1000 brown dwarf suns.

It's ridiculous that Wikipedia is more structured and useful than most of the academic literature. I would like to start some kind of academic movement, whereby we reject closed journals, embrace the open source mentality, and collaborate on up-to-date and awesome wikis on every modern research area.

-1timtyler
That sounds rather like Scholarpedia's plan: http://www.scholarpedia.org/
4David_Gerard
I understand that this is sort of what happens in physics - arXiv preprints (where anything good is expected to be developed into a peer-review-worthy journal article) and a specialist blogosphere. The exchange of prestige and hence the academic credit economy seems to still happen. I suspect the key factor here is arXiv being open-access. So a possible first step is to set up a preprint archive for that field and get the researchers blogging.
wedrifid200

I would like to start some kind of academic movement, whereby we reject closed journals, embrace the open source mentality, and collaborate on up-to-date and awesome wikis on every modern research area.

Ok, your next task is to figure out a way to make academics gain status by participation in that plan. :)

Jordan20

You're right, the guideline is not too well worded. You should probably replace "what you wouldn't eat raw" with "what would be toxic to eat raw".

Meat is edible raw. There's nothing inherently toxic about uncooked meat. Many other foods require cooking to diminish their toxicity (potatoes, grains, legumes). There's definitely concern about parasites in raw meat, but parasites are not an inherent quality of the meat itself.

There's actually a whole raw paleo sub-subculture. I wouldn't recommend it personally, and I'm not keen to try it myself, but it's there.

Jordan40

I think it's likely humans are evolved to eat cooked food. The guideline "don't eat anything you wouldn't eat raw" isn't intended to dissuade people from eating cooked food, but rather to serve as a heuristic for foods that were probably less commonly eaten by our ancestors. It's unclear to me how accurate the heuristic is. A big counterexample is tubers. Tubers are widely eaten by modern hunter-gatherers and are toxic when uncooked.

Jordan50

There isn't really a rigorous definition of the diet. One guideline some people use is that you shouldn't eat anything you wouldn't eat raw, which excludes beans. Coffee beans aren't actually beans though. I wouldn't be surprised if some people consider coffee not paleo, but there are big names in the paleo scene that drink coffee (Kurt Harris, Art de Vany).

Really, I would say paleo is more a philosophy for how to go about honing in on a diet, rather than a particular diet in and of itself. There are hard lines, like chocolate muffins. I don't think coffee is close to that line though.

1[anonymous]
That surprises me. The paleo diet I know includes meat, which you should cook in order to kill parasites.
-1Peterdjones
There's also a theory that the development of cooking was responsible for the evolutionary Great Leap Forward.
Jordan60

Fluid dynamics. Considering jumping over to computational neuroscience.

I've put some serious thought into a paleo coffee shop. It's definitely on my list of potential extra-academic endeavors if I end up leaving my ivory tower.

Jordan20

I should have done some more due diligence before suggesting my idea:

http://www.cs.ucla.edu/~sblee/Papers/mobicom09-wnoc.pdf

Edit: I was originally concerned about bandwidth, but the above article claims

On-chip wireless channel capacity. Because of such low signal loss over on-chip wireless channels and new techniques in generating terahertz signals on-chip [14,31], the on-chip wireless network becomes feasible. In addition, it is possible to switch a CMOS transistor as fast as 500 GHz at 32 nm CMOS [21], thus allowing us to implement a large number of hi

... (read more)
Jordan150

Luckily a juicy porterhouse steak is a nice stand-in for a triple chocolate muffin. Unfortunately they don't tend to sell them at coffee shops.

Perhaps I'll end my career as a mathematician to start a paleo coffee shop.

1Alicorn
Is coffee in the paleo diet?

I fully expect that less than 0.1% of mathematicians are working on math anywhere near as important as starting a chain of paleo coffee shops. What are you working on?

9NancyLebovitz
A fast search suggests that there aren't any paleo restaurants, and possibly not even paleo sections on menus, so there might just be a business opportunity.
Jordan10

Field: Electrical Engineering. No idea how practical this is though:

An important problem with increasing the number of cores on a chip is having enough bandwidth between the cores. Some people are working on in-silicon optical channels, which seems promising. Instead of this, would it be possible for the different cores to communicate with each other wirelessly? This requires integrated transmitters and receivers, but I believe both exist.

2twanvl
I am not an electrical engineer, but as far as I know, wireless communication requires a relatively large antenna. Also, the bandwidth is likely a lot worse than that of a wire. There is a good reason that people still use wires whenever possible.
Jordan100

This is a great list. I think it's too easy to focus on will power alone.

I've been training myself for years to be able to work longer hours. I've built up to the point where I can work for 12-16 hours straight, every day. Unfortunately I'm only now realizing the extent of other costs. During weeks or months when I'm working hard, I have begun to notice many things:

  • I find it much harder to remain completely calm and respectful while interacting with people close to me. (I used to pride myself on my levelheadedness in interpersonal relationships)
  • I find my
... (read more)
1A1987dM
I've noticed that too: I'm much less agreeable if I'm sleep-deprived or if I've been studying hard for a while, even when I'm not actually feeling tired.
1handoflixue
Do you feel like your productivity has gone up significantly thanks to these extra hours? Do you have any objective, external metrics that might confirm that?
Jordan00

Sounds like something that could be useful for rationality boot camp.

I'd love to do some rejection therapy. There might need to be some caution in applying it in a group setting though. I know for me it would be much easier (and hence much less useful) to do things like asking for a discount if there is a social group behind me to back me up (even if they are out of sight).

5handoflixue
Anecdotally, I've found that having a social group behind you is actually really helpful. My church actually had us go to a sex shop and purchase safe sex supplies, so that we'd learn to overcome our social anxiety around it. Given I was asexual and probably around 14, it was a pretty embarrassing thing to do. Knowing that all of my youth group peers would make fun of me for failing outweighed that social anxiety, though. Going through that really did seem to help disarm a lot of the anxiety, if only by having an actual positive interaction I could point to and say "See? Nothing bad happened!"
Jordan00

You're getting into dangerous philosophical territory here, which is not at all easy to resolve. If there are two animals with very similar brain states are they distinct animals? If not, have we doubled the subjective chance of an animal experiencing the state of the doubled animal? These aren't straightforward questions at all. See the Anthropic Trilemma.

I'm not sure how anyone could argue that bringing more suffering animals into the world is good. I support humane treatment of livestock, which I think makes for a net positive regardless of how the Anth... (read more)

Jordan90

This is a more reasonable and measured reply. Negative comments are great, so long as they have substance.

2MarkusRamikin
Positive ones don't have to have substance? ;)
Jordan180

You could probably find other philosophers to help out. The end result, if supported properly by Eliezer, could be very helpful to SIAI's cause.

If SIAI donations could be earmarked for this purpose I would double my monthly contribution.

Jordan00

I think that's just a common misunderstanding most people have of MWI, unfortunately. Visualizing a giant decohering phase space is much harder than imagining parallel universes splitting off. I'm fairly certain that Eliezer's presentation of MWI is the standard one though (excepting his discussion of timeless physics perhaps).

Jordan20

Upvoted, although my understanding is that there is no difference between Eliezer's MWI and canonical MWI as originally presented by Everett. Am I mistaken?

0Cyan
Since I'm not familiar with Everett's original presentation, I don't know if you're mistaken. Certainly popular accounts of MWI do seem to talk about "worlds" as something extra on top of QM.