All of latanius's Comments + Replies

... I did my fair share too, Santa vs. thin threads spun across the way between where the presents were supposed to emerge and the door... "stand back, I'm going to try Science" for the first time I remember.

Actually, it was a really nice experience not only about Science but also about how compartmentalization feels from the inside. I definitely remember thinking both that it's my parents and that it's some kind of mystical thingy; the only new thing that year was that these two aren't supposed to coexist in the same world. Not surprisingly, it's the very same feeling that I felt after being exposed to a semester of Catholic middle school. Didn't have a name for it then though...

latanius190

Martial arts training camp. Average sleep time was around 4 hours per day; with guard shifts around the clock, it sometimes ended up being 2. So towards the end of the week I was quite... sleepy. And this seems to have an interesting effect on visual pattern recognition.

One day, me and another guy were standing guard, around 4 in the morning, the sun was just about to come up. Making circles around the countryside weekend house we were staying in, I noticed that some people appeared with a truck and started to pick grapes from the nearby field. I promptly... (read more)

k2pdfopt. It slices up pdfs so that you can read them without zooming on a much narrower screen, and since its output pdfs are essentially images, it eats everything up to (and including) very math-heavy papers, regardless of the number of columns they have. Also, it works with scanned stuff too.

(And even though the output is a bit bigger than the originals, I didn't encounter any problems with 600-page books... the result was about 50 megs tops.)

Possibly relevant: Sudbury schools, with the curriculum of "do whatever you want, as long as you're in school, surrounded by interesting stuff". Also, http://www.psychologytoday.com/blog/freedom-learn. It really seems that we are doing quite badly by default...

As it turns out, for example, kids are quite good at learning stuff from each other (including things like reading... "I can't always get the big kids to read me stories, so I'd better go and learn this <> thing from them"...)

Now, find a way to prevent that from happening. Sorting kids by age and separating the groups? Perfect.

1NancyLebovitz
A little dubiousness about Sudbury-- basically a claim that the democracy aspect means that you need to be good at small group politics.
Baughn160

Your second point is very important.

It's not just about lost opportunities to learn... well, it mostly is, but consider socialisation. Kids are supposed to learn how to act in society from... who, exactly? Obviously not the teachers, they aren't around enough of the time to do it, and socialisation isn't really in the curriculum anyway. So. Parents, typically, and hope you have good ones.

Historically, you'd learn from older siblings or friends, but families are smaller and age-sorting has almost entirely eliminated cross-age friendships. That's bad; eliminating that problem might be a good start towards fixing schools.

latanius110

Your "running different code" approach is nice... especially paired up with the notion of "how the algorithm feels from the inside", seems to explain lots of things. You can read books about what that code does, but the best you can get is some low quality software emulation... meanwhile, if you're running it, you don't even pay attention to that stuff as this is what you are.

Aren't utility functions kind of... invariant under positive scaling and the addition of a constant?

That is, you can say "I would like A more than B" but not "having A makes me happier than you would be having it". Nor "I'm neither happy nor unhappy, so me not existing wouldn't change anything". It's just not defined.

Actually, the only place different people's utility functions can be added up is in a single person's mind, that is, "I value seeing X and Y both feeling well twice as much as just X being in such a state"... (read more)
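To make the invariance point concrete, here is a toy sketch (the outcomes and numbers are made up): applying any positive affine transform a*u + b to a utility function leaves the preference ordering untouched, which is why "A over B" is meaningful but absolute levels, and sums across people, are not.

```python
# Utility functions are only meaningful up to a positive affine transform:
# u'(x) = a*u(x) + b with a > 0 represents exactly the same preferences.
u = {"A": 10.0, "B": 3.0, "C": -5.0}  # toy utility assignment

def transform(u, a, b):
    """Apply a positive affine transform to a utility assignment."""
    assert a > 0
    return {x: a * v + b for x, v in u.items()}

def ranking(u):
    """Preference ordering: outcomes sorted from most to least preferred."""
    return sorted(u, key=u.get, reverse=True)

v = transform(u, a=2.5, b=100.0)

print(ranking(u))  # ['A', 'B', 'C']
print(ranking(v))  # ['A', 'B', 'C'] -- the same ordering
# But the *levels* are not comparable: u["C"] is negative while v["C"] is
# positive, so "negative utility" or cross-person sums aren't well defined.
print(u["C"], v["C"])
```

The same ranking comes out of both, so nothing observable about the agent's choices pins down the scale or the zero point.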

1Nornagest
It's hard to do utilitarian ethics without commensurate utility functions, and so utilitarian ethical calculations, in the comparatively rare cases where they're implemented with actual numbers, often use a notion of cardinal utility. (The Wikipedia article's kind of a mess, unfortunately.) As far as I can tell this has nothing to do with cardinal numbers in mathematics, but it does provide for commensurate utility scales; in this case, you'd probably be mapping preference orderings over possible world-states onto the reals in some way. There do seem to be some interesting things you could do with pure preference orderings, analogous to decision criteria for ranked-choice voting in politics. As far as I know, though, they haven't received much attention in the ethics world.

You won. Aren't rationalists supposed to be doing that?

As far as you know, your probability estimate for "you will win the lottery" (in your mind) was wrong. It is another question how that updates the probability of "you would win the lottery if you played next week", but whatever made you buy that ticket (even though the "rational" estimates voted against it... "trying random things", whatever it was) should be applied more in the future.

Of course, the result is quite likely to be "learning lots of nonsense fr... (read more)

... this is the thing I've been looking for! (I think I had some strange cached thought from who knows where that posts do not have comments feeds, so I didn't even check... thanks for the update!)

Didn't they do the same with set theory? You can derive a contradiction from the existence of "the set of sets that don't contain themselves"... therefore, build a system where you just can't do that.

(of course, coming from the axioms, it's more like "it wasn't ever allowed", like in Kindly's comment, but the "new and updated" axioms were invented specifically so that wouldn't happen.)

Is there a nice way of being notified about new comments on posts I found interesting / commented on / etc? I know there is a "comments" RSS feed, but it's hard to filter out interesting stuff from there.

... or a "number of green posts" indicator near the post titles when listing them? (I know that a) it takes someone to code it and b) my gut feeling is that it would take a bit more resources than usual, but maybe someone knows an easier way to get the same effect.)

3Oscar_Cunningham
I don't quite see what you mean here. Do you know that each post has its own comments RSS feed?

P(Luminosity fan | reads this comment) is probably not a good estimate... (count me in with a "no" data point though :)) Also, what is the ratio of "Luminosity fan because of Twilight" and "read it even though... Twilight, and liked it" populations?

(with "read Twilight because of Luminosity" also a valid case.)

Also, who is the target audience and what are the plans for reaching it? I don't think there are many people who are willing to invest time AND money into a book like this while still not having read the sequences (available freely on the web, and also in all kinds of e-book formats).

For the two use cases I imagine at the moment:

  • giving it as a gift as an introduction to rationalist stuff feels better with a physical book indeed. Yes, there is a difference between buying an e-book for yourself and downloading the same stuff for free, especially in terms o

... (read more)
0[anonymous]
Reading them on the web is difficult because of organizational issues. The medium can be an issue too (I generally avoid reading long texts on my computer because of eye issues; I buy hard copies/Kindle editions or print instead). More information on the target audience would be good, though.

The point is that it's not, but making it so is a design goal of the paper.

Example: Mario immediately jumping into a pit at level 2. According to the learned utility function of the system, it's a good idea. According to ours, it's not.

Just as with optimizing smiling faces. But while that one was purely a thought experiment, this paper presents a practical, experimentally testable benchmark for utility function learning, and, by the way, shows a not-yet-perfect but working solution for it. (After all, Mario's Flying Goomba Kick of High Munchkinry definitely satisfies our utility functions.)

I heard the opposite too: don't try to push your own research too hard, especially in the beginning, but try to find something the others in the lab group are working on, learn stuff from them, and after a while you'll end up with your own ideas anyway.

Pros and cons for both of the approaches exist, but "picking a thesis early on" might be hard as you don't necessarily know what the good problems are in your field. But that might depend on your field / advisor too.

3elharo
Perhaps. All I can say is I was told that too; I tried that; and it really, really didn't work out. I think I might have been more successful had I focused on my own interests more. Certainly looking at my career in my adult life, almost all my biggest successes, with maybe one exception, were when I chose what to work on instead of agreeing to work on someone else's idea. Of course you do need to adjust this for your field. If you're working in pure math or Roman history, it's not all that hard to do your own thing. In experimental high energy physics, maybe not so much. If you need a million dollar laboratory to get started in a field, then you may not have a lot of choice in what you work on. Though even in the experimental field I'm most familiar with, observational astronomy, it still appears to me as if the most successful people did their own thing. It probably does matter than in astronomy, it's standard practice to allot telescope time based on proposals rather than ownership. Also, of course, if you can decide early on your area and aim at the program that does that well, then that's the best of both worlds. If you know you want to work on high temperature superconductivity, you're better off in a department that does a lot of work on solid state physics and better yet superconductivity specifically rather than one that specializes in string theory or experimental high energy physics.
2Kindly
In any case, however, I think it's pretty important to start doing some kind of research as early as possible. My own experience is with math grad school, where it's common to just focus on taking classes for the first year or two; but it's better to also be doing research during that time if you can.

You see that you won't get stuck: that you'll finish relatively fast.

Do you know of a way for estimating this? Every research problem (and the entire PhD itself) might look much easier before you start working on it (you don't have an outside view perspective before starting).

2JoshuaFox
Nothing is guaranteed, but a good adviser and good personal focus should allow you to finish it off. Again, no guaranteed answers, but if a given problem is too big, you can redefine your thesis to a narrower part of the problem.
latanius110

This thing looks more and more relevant as I think about it. What it does is not just optimizing an objective function in a weird and unexpected way, but actually learning it, in all its complexity, from observed human behavior.

Would it be an overestimation to call this a FAI research paper?

1Baughn
AI research paper? Maybe not. What's friendly about this AI?
latanius140

Wow. First thought: who is this guy who submits a really cool scientific result to a thing like SIGBOVIK? He could have sent this thing to a real conference! It's a thing no one has ever tried!

Then I checked out his website. The academic one. And the others.

Well, short description: "Superhero of Productivity". The list of stuff he created doesn't fit on his site. Sites. Also, see this remark of his,

One of the best things about grad school was that if you get your work done then you get to do other stuff too.

(I'm also at CS grad school, am happy if I have time to sleep, and my only productive output is... LW comments... does that count?)

Also, Tom's academic website. It's the coolest academic website I've ever seen.

In the class I TA for, the students can go to the professor's office hours after the midterm / final, and if they can solve the problem there, they still get... half of the points? I wonder how that one affects test-taking performance.

Also, this whole thing seems to be annoyingly resistant to Bayesian updates... "Every time I'm anxious I perform badly, and now I'm worried about being too worried for this exam", and, since performing badly is a very valid prediction in this state of mind, the worry is there to stay.

Maybe if the tests are called "quizzes" the students end up in the other stable state of "not being worried"?

1jooyous
I feel like it's the students' responsibility to calibrate their own personal correct amount of worry that it takes to make them study, regardless of what the thing is called? (Like if I say "This quiz is worth 50% of your grade," they should be able to tell that it's not really a quiz.) But at the same time, it sounds like some brains have this worry horizon where once they start worrying, then it's all they can do. So we need to somehow calibrate the scariness of exams so that only a very small percentage of people fall off the worry horizon, because people who fail from not studying can just start studying. The stable state of not being worried is a good place! ^_^ This kind of reminds me of all of the (non-technical) articles about game addiction and how it's in the designers' best interest to keep everyone hooked but still high-functioning enough that we won't outlaw WoW the way we outlaw harmful, addictive narcotics. Brains are such a mess. ^_^

Do we need a realistic simulation at all? I was thinking about how educational games could devolve into, instead of "guessing the teacher's password", "guessing the model of the game"... but is this a bad thing?

Sure, games about physics should be able to present a reasonably accurate model so that if you understand their model, you end up knowing something about physics... but with history:

actually, what's the goal of studying history?

  • if the goal is to do well on tests, we already have a nice model for that, under the name of Anki. Of
... (read more)
5RolfAndreassen
It's true that you don't need a model that lets you form new theories of the downfall of the Empire; but my point is that even the accepted textbook causes would be very hard to model in a way that combines fun, challenge, and even the faintest hint of realism. Take the theory that Rome was brought down partly by climate change; what's the Emperor supposed to do about it? Impose a carbon tax on goats? Or the theory that it was plagues what did it. Again, what's the lever that the player can pull here? Or civil wars; what exactly is the player going to do to maintain the loyalty of generals in far-off provinces? At least in this case we begin to approach something you can model in a game. For example, you can have a dynastic system and make family members more loyal; then you have a tradeoff between the more limited recruiting pool of your family, which presumably has fewer military geniuses, versus the larger but less loyal pool of the general population. (I observe in passing that Crusader Kings II does have a loyalty-modelling subsystem of this sort, and it works quite well for its purposes. Actually I would propose that as a history-teaching game you could do a lot worse than CKII. Kaj, you may want to look into it.) Again, suppose the issue was the decline of the smallholder class as a result of the vast slaveholding plantations; to even engage with this you need a whole system for modelling politics, so that you can model the resistance to reform among the upper classes who both benefit by slavery and run most of your empire. Actually this sounds like it could make a good game, but easy to code it ain't.
9Vaniver
If what they learned about "evolution" comes from Pokemon, then yes.

Have you played the Portal games? They include lots of things you mention... they introduce how to use the portal gun, for example, not by explaining stuff but giving you a simplified version first... then the full feature set... and then there are all the other things with different physical properties. I can definitely imagine some Portal Advanced game when you'll actually have to use equations to calculate trajectories.

Nevertheless... I'd really like to be persuaded otherwise, but the ability to read Very Confusing Stuff, without any working model, and ... (read more)

0Kaj_Sotala
I've played the first Portal game for a bit, and I liked it, but haven't finished it because puzzle games aren't that strongly my thing. I wonder whether not liking them much is a benefit or a disadvantage for an edugame designer. :-) True enough. But I don't think that very much of education consists of trying to teach this skill in the first place (though one could certainly argue that it should be taught more), and having a solid background in other stuff should make it easier when you do get to that point.

I have a similar experience... around two years ago, both my laptop and desktop power supplies died (power surge), leaving me with a PII-300... with which I had some "let's be authentic nineties" fun previously, so Win98 and Office 97. Except for the browser (lots of websites didn't even load on IE4-ish browsers), so I ended up with Firefox 3.x (the newest that ran on Win98).

It actually took a long time at 100% CPU to render web sites. And then further time to scroll them.

My observation is the same as yours: there is nothing better to discourage rand... (read more)

The problem with no notifications is that because you're still in a room where interesting stuff is going on, of course you'll check the chat history and/or join the people already chatting. (Unless you use up willpower not to, but the whole point is using less of that.)

Having a 25 min work + 5 min chat cycle seems to be a good thing though; starting to work because everyone else went silent is so much easier than going back to the "library" while everyone else is still talking in the lobby. If you're working, don't go there, that's it.

I went through an Optimization course last semester (CS, grad), so it doesn't really qualify as an "out of class experience"; nevertheless, reading it was quite optional, and, actually, the questions I asked myself were very similar to yours.

Especially in the light of those small remarks textbooks tend to make along the lines of "we don't have any more space here, so if you're interested, the excellent book by X and Y is a very nice read". As if they were referring to some light and entertaining book if the one you were holding weren't r... (read more)

Good idea... especially given that the fact that readers usually aren't immortal vampires / ponies / wizards makes it relatively complicated to emulate the protagonists.

Not to speak of the fact that much of the fictional rationalist awesomeness comes from applying existing stuff (common sense / science) in a setting where it's not expected to be applied. (See Harry's Gringotts money pump or Missy's sciencey superpowers). It's hard to extrapolate that to our world...

Counterexample: Cory Doctorow's Little Brother & Homeland. Although not really ration... (read more)

just finished reading.

It's kind of sad in a... grand way.

No one remembering anymore what exactly being "human" means. But... what do we expect? I don't see any human values that are not satisfied, it just does not "feel like home" that much. But still, orpbzvat bar bs gur yrffre fhcrevagryyvtraprf naq fgvyy univat n fcrpvny cynpr va zvaq sbe fgne gerx? It's as heart-warming as it gets, in a cold, dark and strange universe.

(If only we could do this well.)

Attach lots of sensors to lots of axons, try to emulate the thing while it's running... for me, it's the on-line method that sounds more plausible compared to the "look at axons with microscopes and try to guess what they do" approach. Nevertheless, imagining a scenario with non-destructive uploads... how many times would you allow people to upload? Ending up with questions like that, I think it's the destructive one that would generate less horrifyingness...

latanius650

If you are trying to do X, surround yourself with people who are also doing X. Takes much less willpower to keep doing it.

On a related note, make sure that they are people who are actively doing X, or at least making credible progress towards it, not just professing a desire to do X. This is an easy mistake to make.

As for this thread: wouldn't upvoting comments that you think are useful for someone else but not for you actually be an indirect case of other-optimizing?

2Matt_Simpson
I think so.
0Decius
With the expectation that others would reciprocate by encouraging behavior that benefits you.

Also consider mint.com. Draws awesome graphs. (It only works for US bank accounts though...)

latanius290

It's so nice that if you combine the words Alicorn + Twilight you get "let's make everyone else immortal, too" independently of the universe in question.

9Alicorn
:D
1ikrase
Hahahahhahahahah.

Do we actually have an objective test for the quality of visual imagery? (as compared to subjective quality of it.) What I'm thinking of is something like the mental rotation experiments, proving that in fact there is a representation of images in our heads... but with somewhat more complicated images. Or scenes.

Otherwise... I think I have good imagination abilities (I was also once told so while solving math problems involving rotating cubes), but my subjective quality levels are similar: pictures are somewhat vague, especially compared to the ones I can get... (read more)

0DaFranker
I remember an online pseudo-IQ test that had an image of an irregular 3D shape with pictographs on it, and then several 2d images of that shape "unfolded" in various different ways, with only one of those unfolded representations being correct and having the right pictographs in the right places at the right rotations. Does this sound like the kind of test you were asking about? Mentally visualizing where each side of the shape went when unfolding the shape was important for me in solving those problems, and I think they'd be pretty hard to solve mentally even with intense mastery of abstract algebra.

Wow. Thanks for the idea. Just got instantly sleepy in around 20 seconds after installing it...

2Nectanebo
That may be the placebo effect.

theoretically yes... supposing that the smaller and bigger light sources are of equal efficiency and have the same spectrum. The number of photons emitted adds up linearly after all (I think it even works if the spectra are different.)

In practice, as it turns out (... the "let's read lots of Wikipedia about lighting" project), a 100W bulb produces roughly 1.5 times more lumens per watt than a 40W one, so the "equivalence" is also somewhat questionable... as it's the lumens that count (that being the perceived brightness that is radiated).

(This thing is called "luminous efficacy", by the way. Am I the only one who thinks it would make a nice LW post title at first glance?)
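A quick back-of-the-envelope check of the "roughly 1.5x" claim. The lumen figures below are typical catalog values for standard incandescent bulbs, filled in as assumptions rather than taken from the comment above:

```python
# Typical incandescent output (assumed catalog values, approximate):
# a 40 W bulb emits around 450 lm, a 100 W bulb around 1600 lm.
bulbs = {40: 450, 100: 1600}  # watts -> lumens

# Luminous efficacy = lumens emitted per watt consumed.
efficacy = {w: lm / w for w, lm in bulbs.items()}

ratio = efficacy[100] / efficacy[40]
print(f"40 W:  {efficacy[40]:.2f} lm/W")
print(f"100 W: {efficacy[100]:.2f} lm/W")
print(f"ratio: {ratio:.2f}")  # in the 1.4-1.5 range, consistent with the claim
```

So per watt, the bigger bulb really does give you noticeably more perceived light, which is why comparing bulbs by watts alone is misleading.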

latanius-10

You're right in the sense that we'd like to avoid it, but if it occurs gradually, it feels much more like "we just changed our minds" (we definitely don't value "honor" as much as the ancient Greeks did, etc.), as compared to "we and our values were wiped out".

6Vladimir_Nesov
The problem is not with "losing our values", it's about the future being optimized to something other than our values. The details of the process that leads to the incorrectly optimized future are immaterial, it's the outcome that matters. When I say "our values", I'm referring to a fixed idea, which doesn't depend on what happens in the future, in particular it doesn't depend on whether there are people with these or different values in the future.

AIs that can't be described by attributing goals to them don't really seem too powerful (after all, intelligence is about making the world go in some direction; this is the only property that tells an AGI apart from a rock).

1OrphanWilde
Evolution and capitalism are both non-goal-oriented, extremely powerful intelligences. Goals are only one form of motivators.

design an AI so that it can't self-modify

is there a clean border at all between self-modification and simply learning things? We have "design" and "operation" at two places in our maps, but they can easily be mixed up in reality (is it OK to modify interpreted source code if we leave the interpreter alone? what about following verbal instructions then? inventing them? etc...)

-2OrphanWilde
Given that read-only hardware exists, yes, a clean border can be drawn, with the caveat that nothing is stopping the intelligence from emulating itself as if it were modified. However - and it's an important however - emulating your own modified code isn't the same as modifying yourself. Just because you can imagine what your thought processes might be if you were sociopathic doesn't make you sociopathic; just because an AI can emulate a process to arrive at a different answer than it would have doesn't necessarily give it the power to -act- on that answer. Which is to say, emulation can allow an AI to move past blocks on what it is permitted to think, but doesn't necessarily permit it to move past blocks on what it is permitted to do. This is particularly important in the case of something like a goal system; if a bug would result in an AI breaking its own goal system on a self-modification, this bug becomes less significant if the goal system is read-only. It could emulate what it would do with a different goal system, but it would be evaluating solutions from that emulation within its original goal system.
0JoshuaFox
Little consideration has been given to a block on self-modification because it seems that it is impossible. You could do a non-Von Neumann machine, separating data and code, but data can be interpreted as code. Still, consideration should be given to whether anything can be done, even if only as stopgap.
latanius-10

The latter is not necessarily a bad thing though.

2Vladimir_Nesov
It is a bad thing, in the sense that "bad" is whatever I (normatively) value less than the other available alternatives, and value-drifted WBEs won't be optimizing the world in a way that I value. The property of valuing the world in a different way, and correspondingly of optimizing the world in a different direction which I don't value as much, is the "value drift" I'm talking about. In other words, if it's not bad, there isn't much value drift; and if there is enough value drift, it is bad.

... Accelerando by Charles Stross, while not exactly being a scientific analysis, had some ideas like this. It also wasn't bad.

I'm saying that although it isn't ontologically fundamental, our utility function might still build on it (it "feels real enough"), so we might have problems if we try to extrapolate said function to full generality.

2Karl
If something is not ontologically fundamental and doesn't reduce to anything which is, then that thing isn't real.

ought to be canonically reducible

but it most likely isn't. "X computes Y" is a model in our head that is useful to predict what e.g. computers do, which breaks down if you zoom in (qualia appear in exactly what stage of a CPU pipeline?) or don't assume the computer is perfect (how much rounding error is allowed to make the simulation a person and not random noise?)

(nevertheless, sure, the SAUS might not always exist... but the above question still doesn't seem to have any LW Approved Unique Solution (tm) either :))

0Karl
Are you saying you think qualia is ontologically fundamental or that it isn't real or what?

One shouldn't confuse there being a huge debate over something with the problem being unsolved

sure, good point. Nevertheless, if I'm correct, there still isn't any Scientifically Accepted Unique Solution for the moral value of animals, even though individuals (like you) might have their own solutions (the question is whether the solution uniquely follows from your other preferences, or is somewhat arbitrary?)

generalize n-th differentials over real numbers

(that was just some random example, it's fractional calculus which I heard a presentation abou... (read more)

0Karl
There isn't any SAUS for the problem of free will either. Nonetheless, it is a solved problem. Scientists are not in the business of solving that kind of problems, those problems generally being considered philosophical in nature. It certainly appear to uniquely follow. That seems easy to answer. Modulo a reduction of computation of course but computation seems like a concept which ought to be canonically reducible.

On a first read, the question sounded like it didn't constrain anything in the real world (see the "tree falling in a forest" question)... but, in fact, it is relevant because it impacts our moral judgments.

Which says something though about the consistency of our moral judgments... (recently discussed around here).

Although I've read the metaethics sequence, that was a long time ago, but I think I'll put reading it again on my todo list then!

My intuition behind thinking 1 unlikely (yes, it's just an intuition) comes from the fact that we are already bad at generalizing "people-ness"... (see animal rights for example: huge, unsolved debates over morality, combined with the fact that we just care more about human-looking, cute things than non-cute ones... which seems to be pretty arbitrary to me). And things will get worse when we end up with entities that con... (read more)

0A1987dM
Huh. I assumed that they would give the same result, at least for sufficiently well-behaved functions. They don't?
-1Karl
One shouldn't confuse there being a huge debate over something with the problem being unsolved, far less unsolvable (look at the debate over free will or worse p-zombies). I have actually solved the problem of the moral value of animals to my satisfaction (my solution could be wrong, of course). As for the problem of dealing with peoples having multiple copies this really seems like the problem of reducing "magical reality fluid" which while hard seems like it should be possible. Well, yes. But in general if you're trying to elucidate some concept in your moral reasoning you should ask yourself the specific reason why you care about that specific concept until you reach concepts that looks like they should have canonical reductions, then you reduce them. If in doing so you end up with multiple possible reductions that probably mean you didn't go deep enough and should be asking why you care about that specific concept some more so that you can pinpoint the reduction you are actually interested in. If after all that you're still left with multiple possible reductions for a certain concept, that you appear to value terminally, and not for any other reasons, then you should still be able to judge between possible reductions using the other things you care about: elegance, tractability, etc. (though if you end up in this situation it probably means you made an error somewhere...) I'm not sure what you're referring to here... Also, looking at the possibilities you enumerate again, 3 appear incoherent. Contradictions are for logical systems, if you have a component of your utility function which is monotone increasing in the quantity of blue in the universe and another component which is monotone decreasing in the quantity of blue in the universe, they partially or totally cancel one another but that doesn't result in a contradiction.

This seems to be one of the biggest problems for FAI... keeping a utility function constant in a self-modifying agent is hard enough, but keeping it the same over a different domain... well, that's really hard.

Actually, there might be three outcomes:

  • we can extrapolate so that it all adds up to normality when mapped back to the original ontology (unlikely)
  • we can extrapolate in various ways that are consistent with the original ontology & original human brain design, but not unique (which doesn't seem to be a "fail" scenario... we just mig
... (read more)
1Karl
Why exactly do you call 1 unlikely? The whole metaethics sequence argue in favor in 1 (If I understand what you mean by 1 correctly), so what part of that argument do you think is wrong specifically?

I have images because it was super easy to cut them from the lecture notes for the class (press hotkey, draw rectangle, and either ctrl-v or drop file from dock to anki, depending on whether on mac or linux). Also, does latex display work on phones? (ankidroid specifically.) Nevertheless, using latex seems to be the nicer solution indeed, especially if you plan to publish the result (I didn't).

(aand I sent a PM with the link)

0Risto_Saarelma
You can get it to work by generating png images of the Latex fragments into a Dropbox account shared with the phone on PC. Last time I tried it, it was a bit tricky to get working.
0tut
Anki uses latex to create an image, which is then used in the deck. So I imagine that it will work exactly the same way that it would with your copied images.

I've been experimenting lately with an Optimization deck, so far it's mostly about unconstrained optimization, Newton method, Conjugate Gradients, that sort of stuff, with some formulas added as pictures. If you're interested, I can upload it somewhere! (Note: it's for Anki2.)

0Matt_Simpson
I'd like to take a look at it, thanks! You can embed latex code in the anki cards with the tags [latex] code here [/latex] instead of adding the formulas as pictures. I've started working on a sets and functions deck doing just this.
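A minimal sketch of a card using the [latex] tags Matt_Simpson describes (the card text itself is hypothetical, just to show the shape; note that inside [latex] you still need math-mode delimiters):

```
Front: Newton's method update step (1-D minimization)
Back:  [latex]$x_{k+1} = x_k - \frac{f'(x_k)}{f''(x_k)}$[/latex]
```

As tut notes below, Anki compiles the fragment with a local LaTeX install into an image stored in the deck, so it should display the same way the copied pictures do.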

Try k2pdfopt! I use it all of the time with scientific papers, with lots of formulas, and it works quite well. It practically converts the pdf to images and slices them up, outputting another pdf, but the size increase is not too significant (still usable file sizes with multiple-hundred page long books).

0drethelin
Thanks! This isn't actually useful to me since I read almost nothing really hardcore on my phone but it's good to know about.

Oh, yes. I do Dropbox syncing, too (this is the other good thing about org-mode: plain text files). And there might be some truth in the statement that while org-mode is excellent for a single file, things start to be less seamless when it comes to more of them... inter-file links don't seem to be that reliable, for example. Is this the reason for your One Big Org File?

For white on black, it's just (setq default-frame-alist '((background-color . "black") (foreground-color . "white"))) in your .emacs.

Actually, it's kind of typical lesswr... (read more)

0jwhendy
For links, I switched to the org-id module and a unique ID for any new links. It works as long as the file containing the target headline is in the same directory as the file containing the link.

I tried the Android app just after I read your comment (it's a thing I've been putting off for a long time), well... it really doesn't include the "creating nested outlines easily" part I like org-mode for, and the synchronization part also seems to be kind of... strange. Just as you said.

What I really like about it is the minimum effort that it needs to, for example, create a todo item (compared with web-based solutions). Too bad that these todo items usually end up really unorganized. Would be indeed nice to have some interface between, e.g. No... (read more)
