All of darius's Comments + Replies

That writes in one session can affect another session violates my expectations, at least, of the boundaries that'd be set.

3Paul McMillan
The behavior is a bit of an implementation detail. We don't provision more than a single sandbox per user, so the data on disk within that sandbox can overlap when you have multiple concurrent sessions, even though the other aspects of the execution state are separate. I agree this is a bit surprising (though it has no security impact), and we've been discussing ways to make this more intuitive.

Nit: 0.36 bits/letter seems way off. I suspect you only counted the contribution of the letter E from the above table (-p log2 p for E's frequency value is 0.355).
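Spelling out the distinction as a quick Scheme sketch -- the single -p log2 p term for one letter versus the sum over the whole alphabet (the frequencies below are illustrative, not taken from the post's table):

(define (log2 x) (/ (log x) (log 2)))

;; -p log2 p: the entropy contribution of a single letter with frequency p.
(define (term p) (- (* p (log2 p))))

;; Per-letter entropy is the SUM of such terms over the whole alphabet.
(define (entropy freqs) (apply + (map term freqs)))

(display (term 0.11)) (newline)
; => about 0.35 bits: one letter's contribution on its own, roughly the figure quoted.

(display (entropy (make-list 26 (/ 1.0 26)))) (newline)
; => log2 26, about 4.7 bits/letter for a uniform alphabet; real English letter
; frequencies bring the sum down somewhat, but nowhere near 0.36.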

3johnswentworth
Wow, I really failed to sanity-check that calculation. Fixed now, and thank you!

Agreed. I had [this recent paper](https://ieeexplore.ieee.org/abstract/document/9325353) in mind when I raised the question.

The Landauer limit constrains irreversible computing, not computing in general.

3[anonymous]
On the technology readiness level, I put reversible computing somewhere between von Neumann probes and warp drive. Definitely post-Singularity, likely impossible.

Here's the argument I'd give for this kind of bottleneck. I haven't studied evolutionary genetics; maybe I'm thinking about it all wrong.

In the steady state, an average individual has n children in their life, and just one of those n makes it to the next generation. (Crediting a child 1/2 to each parent.) This gives log2(n) bits of error-correcting signal to prune deleterious mutations. If the genome length times the functional bits per base pair times the mutation rate is greater than that log2(n), then you're losing functionality with every generation.

On... (read more)
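To make the threshold concrete, here is a tiny Scheme sketch of the comparison in the argument above; the genome length, functional-bits, and mutation-rate numbers are placeholders for illustration, not measured values:

(define (log2 x) (/ (log x) (log 2)))

;; Selection supplies about log2(n) bits per generation if an average
;; individual has n children and (crediting 1/2 per parent) one replaces them.
(define (selection-bits n) (log2 n))

;; Functional bits lost to new mutations per generation:
;; genome length * functional bits per base pair * per-base mutation rate.
(define (mutation-bits genome-length bits-per-bp mutation-rate)
  (* genome-length bits-per-bp mutation-rate))

;; Placeholder numbers: 3e9 bp, 0.1 functional bits/bp, 1e-8 mutations/bp/generation.
(define load (mutation-bits 3e9 0.1 1e-8))   ; => 3.0 bits lost per generation
(define signal (selection-bits 4))           ; n = 4 children => 2.0 bits

(display (if (> load signal)
             "losing functionality every generation"
             "selection can keep up"))
(newline)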

An allegedly effective manual spaced-repetition system: flashcards in a shoebox with dividers. You take cards from the divider at one end and redistribute them by how well you recall. I haven't tried this, but maybe I will since notecards have some advantages over a computer at a desk or a phone.

(It turns out I was trying to remember the Leitner system, which is slightly different.)
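For comparison, a minimal Scheme sketch of the standard Leitner scheduling rule (as distinct from the shoebox variant described above): a card moves up a box when recalled, drops back to box 1 when missed, and box i is only reviewed every 2^(i-1) sessions.

;; Standard Leitner rule, sketched: promote on success, demote to box 1 on
;; failure, and review box i only every 2^(i-1) sessions.
(define (next-box box correct?)
  (if correct? (+ box 1) 1))

(define (due? box session)
  (= 0 (modulo session (expt 2 (- box 1)))))

(display (due? 3 8)) (newline)        ; #t: a box-3 card comes up every 4th session
(display (next-box 3 #f)) (newline)   ; 1: miss it and it goes back to daily review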

Radical Abundance is worth reading. It says that current work is going on under other names like biomolecular engineering, the biggest holdup is a lack of systems engineering focused on achieving strategic capabilities (like better molecular machines for molecular manufacturing), and we ought to be preparing for those developments. It's in a much less exciting style than his first book.

Small correction: Law's Order is by David Friedman, the middle generation. It's an excellent book.

I had a similar reaction to the sequences. Some books that influenced me the most as a teen in the 80s: the Feynman Lectures and Drexler's Engines of Creation. Feynman modeled scientific rationality, thinking for yourself, clarity about what you don't know or aren't explaining, being willing to tackle problems, ... it resists a summary. Drexler had many of the same virtues, plus thinking carefully and boldly about future technology and what we might need to do... (read more)

1JenniferRM
Thanks for the correction! I'll leave the Milton/David error in, so your correction reads naturally :-)

I'm not a physicist, but if I wanted to fuse metallic hydrogen I'd think about a really direct approach: shooting two deuterium/tritium bullets at each other at 1.5% of c (for a Coulomb barrier of 0.1 MeV according to Wikipedia). The most questionable part I can see is that a nucleus from one bullet could be expected to miss thousands of nuclei from the other, before it hit one, and I would worry about losing too much energy to bremsstrahlung in those encounters.

I also reviewed some of his prototype code for a combinatorial prediction market around 10 years ago. I agree that these are promising ideas and I liked this post a lot.

Robin Hanson proposed much the same over 20 years ago in "Buy Health, Not Health Care".

0[anonymous]
There's research suggesting that in developing countries, increased healthcare spending doesn't improve health outcomes like longevity. Don't know how good the research is though.
2[anonymous]
As someone living in a country with universal/governmental healthcare, I think we are doing this. If I am healthy and working, I am an asset for the state: I pay taxes and social security. If I am ill or disabled, they have to pay me. If I am dead, I don't pay them.

Of course it is not ideal, first of all because of the usual problem of government: politicians and bureaucrats don't get a dividend from the profits of the state, so they are not incentivized to maximize profitability. Secondly, there are some incentive pitfalls, like the fact that I am cheaper for them dead than collecting disability pay. Once it looked likely that I would never work again, their incentive would be to provide zero healthcare. So while it is not ideal, the basic idea is there -- people pay something to an organization every month or year while they are healthy and working, and the healthcare costs are paid by that organization, so it wants to keep its clients healthy, working, and paying -- and it can be tweaked. Part of the story is that people should keep paying even when retired; from this angle life insurance is better than social security.

However, I think there is no workaround for the fact that once you have 5 years to live and extending that to 10 years costs a lot, whatever tax or insurance premiums you would pay in the second 5 years would not cover it, and thus the organization has no incentive to extend your life. This can only be fixed by strict contracts or by politics.

A third option is kids. To go a bit sci-fi here: we make a pill that easily extends female fertility up to about 60, and thus we can assume most people will have kids again, because it can be treated like an early retirement. The point is, if people have kids, you can treat families as immortal or long-lived persons. You can work out a scheme where, if dad's life is not extended, the kids will take their insurance elsewhere.
4jacob_cannell
Interesting read, and in the same vein. What I was imagining is a computational market, relying on the ability to do lots of complex trades at high speeds, plus AI/ML. But much of that difference is explained by the 20 years.

IIRC Doug Orleans once made an ifMUD bot for a version of Zendo where a rule was a regular expression. This would give the user a way to express their guess of the rule instead of you having to test them on examples (regex equality is decidable).

Also I made a version over s-expressions and Lisp predicates -- it was single-player and never released. It would time-out long evaluations and treat them as failure. I wonder if I can dig up the code...

Here's what's helped for me. I had strong headaches that would persist for weeks, with some auras, which my doctor called migraines. (They don't seem to be as bad as what people usually mean by the word.) A flaxseed oil supplement keeps them away. When I don't take enough, they come back; it needs to be at least 15g/day or so (many times more than the 2-3 gelcaps/day that supplement bottles direct you to take). I've taken fish oil occasionally instead.

I found this by (non-blinded) experimenting with different allegedly anti-inflammatory supplements. I'm not a doctor, etc.

0Algon
That's pretty strange. I've never heard of flaxseed oil being effective before. Thanks for the advice, I'll try it out when I can.

Computing: The Pattern On The Stone by Daniel Hillis. It's shorter and seemingly more focused on principles than the Petzold book Code, which I can't compare further because I stopped reading early (low information density).

it's also notable that he successfully predicted the rise of the internet

Quibble: there was plenty of internet in 1986. He predicted a global hypertext publishing network, its scale of impact, and when it would start (the mid-90s). (He didn't give any such timeframe for nanotechnology, which I guess is worth mentioning.)

Radical Abundance, which came out this past month.

Added: The most relevant things in the book for this post (which I've only skimmed):

  1. There's been lots of progress in molecular-scale engineering and science that isn't called nanotechnology. This progress has been pretty much along the lines Drexler sketched in his 1981 paper and in the how-can-we-get-there sections of Nanosystems, though. This matches what I saw sitting in on Caltech courses in biomolecular engineering last year. Drexler believes the biggest remaining holdup on the engineering work is how it's

... (read more)

Yes -- in my version of this you do get passed your own source code as a convenience.

If you'd rather run with a very small and well-defined Scheme dialect meant just for this problem, see my reply to Eliezer proposing this kind of tournament. I made up a restricted language since Racket's zillion features would get in the way of interesting source-code analyses. Maybe they'll make the game more interesting in other ways?

There's a Javascript library by Andrew Plotkin for this sort of thing that handles 'a/an' and capitalization and leaves your code less repetitive, etc.

0Armok_GoB
Weird, I just found this random helpful post, marked as down-voted by me and with a strong aversion attached. Fixed the vote, but can't figure out if the aversion is a bug or a feature.

In Einstein's first years in the patent office he was working on his PhD thesis, which, when completed in 1905, was still one of his first publications. I've read Pais's biography and it left me with the impression that his career up to that point was unusually independent, with some trouble jumping through the hoops of his day, but not extraordinarily so. They didn't have the NSF back then funding all the science grad students.

I agree that all the people we're discussing were brought into the system (the others less so than Einstein) and that Einstein had t... (read more)

Better examples of outsider-scientists from around then include Oliver Heaviside and Ramanujan. I'm having trouble thinking of anyone recent; the closest to come to mind are some computer scientists who didn't get PhD's until relatively late. (Did Oleg Kiselyov ever get one?)

0komponisto
Again, I don't care whether the person remained an outsider for their entire life; all they need to have done is to have made a contribution while outside. Thus Einstein in the patent office fully counts. Moreover, it is worth noting that Ramanujan was brought to England by the ultra-established G.H. Hardy, and even Heaviside was ultimately made a Fellow of the Royal Society. So even they became "insiders" eventually, at least in important senses.

Yes, that's where I got the figure (the printed book). The opening chapter lists a bunch of other figures of merit for other applications (strength of materials, power density, etc.)

Figure 16.8. (I happened to have the book right next to me.)

Ah -- .1nm is also the C-H or C-C bond length, which comes to mind more naturally to me thinking about the scale of an organic molecule -- enough to make me wonder where the 0.24 was coming from. E.g. a (much bigger) sulfur atom can have bonds that long.

Isn't an H atom more like 0.1nm in diameter? Of course it's fuzzy.

I agree with steven0461's criticisms. Drexler outlines a computer design giving a lower bound of 10^16 instructions/second/watt.
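For scale, a back-of-the-envelope comparison of that figure against the Landauer limit at room temperature (my own arithmetic, not a number from the book):

(define k-boltzmann 1.380649e-23)   ; J/K
(define temperature 300)            ; K, roughly room temperature
(define landauer-joules-per-bit (* k-boltzmann temperature (log 2)))  ; ~2.9e-21 J

;; Bit erasures per second that one watt can pay for at the Landauer limit:
(define erasures-per-second-per-watt (/ 1.0 landauer-joules-per-bit))  ; ~3.5e20

;; At 10^16 instructions/second/watt, each instruction's energy budget is
;; equivalent to tens of thousands of Landauer bit erasures:
(display (/ erasures-per-second-per-watt 1e16)) (newline)   ; ~3.5e4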

Should there be a ref to http://e-drexler.com/d/07/00/1204TechnologyRoadmap.html ?

Quibbling about words: "atom by atom" seems to have caused some confusion with some people (taking it literally as defining how you build things when the important criterion is atomic precision). Also "nanobots" was coined in a ST:TNG episode, IIRC, and I'm not sure if people in the field use it.

1lukeprog
I've seen this before but now I can't find it. Do you have a link?
4steven0461
Apparently .24nm is twice the Van der Waals radius and .1nm is twice the Bohr radius. I'm not sure which one has a better case for being called the "true radius".
5steven0461
You're thinking of "nanites", I'm pretty sure.

You could grind seeds in a coffee grinder, as BillyOblivion suggests. (I don't because the extra stuff in seeds disagrees with another body issue of mine.) Sometimes I take around 5 gelcaps a day while traveling, which isn't as effective but makes most of the difference for the headaches.

What I do is put on a swimmer's nose clip, drink the oil by alternately taking in a mouthful of water and floating a swallow of oil down on top of that; follow up with a banana or something because I've found taking it on an empty stomach to disagree with me; have a bit mo... (read more)

0BillyOblivion
I didn't usually grind them. Well, not in a grinder. I just ate them and let my molars and stomach acids and gut bacteria do the work.

My headaches mostly went away with daily flaxseed oil or fish oil. I have no particular reason to expect you'd see the same, but it's easy to try. I take 1 or 2 tablespoons of flaxseed oil per day.

1Alicorn
I recently tried drinking oil for the Shangri-La diet and it made me want to puke; is there some tasty preparation you recommend?

Thanks! Yes, I figure one-shot and iterated PDs might both hold interest, and the one-shot came first since it's simpler. That's a neat idea about probing ahead.

I'll return to the code in a few days.

On message passing as described, that'd be a bug if you could do it here. The agents are confined. (There is a side channel from resource consumption, but other agents within the system can't see it, since they run deterministically.)

I hadn't considered doing that -- really I just threw this together because Eliezer's idea sounded interesting and not too hard.

I'll at least refine the code and docs and write a few more agents, and if you have ideas I'd be happy to offer advice on implementing your variant.

I followed Eliezer's proposal above (both players score 0) -- that's if you die at "top level". If a player is simulating you and still has fuel after, then it's told of your sub-death.

You could change this in play.scm.

When you call RUN, one of two things happens: it produces a result or you die from exhaustion. If you die, you can't act. If you get a result, you now know something about how much fuel there was before, at the cost of having used it up. The remaining fuel might be any amount in your prior, minus the amount used.

At the Scheme prompt:

(run 10000 '(equal? 'exhausted (cadr (run 1000 '((lambda (f) (f f)) (lambda (f) (f f))) (global-environment)))) global-environment)
; result: (8985 #t)    ; The subrun completed and we find #t for yes, it ran to exhaustion.

(r
... (read more)
3DavidLS
Oh, okay, I was missing that you never run the agents as scheme, only interpret them via ev. Are you planning on supporting a default action in case time runs out? (and if so, how will that handle the equivalent problem?)
0lessdazed
If you can't act, what happens score-wise?

The only way to check your fuel is to run out -- unless I goofed.

You could call that message passing, though conventionally that names a kind of overt influence of one running agent on another, all kinds of which are supposed to be excluded.

It shouldn't be hard to do variations where you can only run the other player and not look at their source code.

3DavidLS
I'm not a native schemer, but it looks like you can check fuel by calling run with a large number and seeing if it fails to return... e.g.

(eq (run 9999 (return C) (return C)) 'exhausted)

[note that this can cost fuel, and so should be done at the end of an agent to decide if returning the "real" value is a good idea]

giving us the naive DefectorBot of

(if (eq (run 9999 (return C) (return C)) 'exhausted) C D)

[Edit: and for detecting run-function-swap-out:

(if (neq (run 10000000 (return C) (return C)) 'exhausted)
    C  ;; someone is simulating us
    (if (eq (run 9999 (return C) (return C)) 'exhausted)
        C  ;; someone is simulating us more cleverly
        D))

]

[Edit 2: Is there a better way to paste code on LW?]

Re: not showing source: Okay, but I do think it would be awesome if we get bots that only cooperate with bots who would cooperate with (return C).

Re: message passing: Check out http://en.wikipedia.org/wiki/Message_passing for what I meant?

I just hacked up something like variant 3; haven't tried to do anything interesting with it yet.

2JenniferRM
Awesome! The only suggestion I have is to pass in a putative history and/or tournament parameters to an agent in the evaluation function so the agent can do simple things like implement tit-for-tat on the history, or do complicated things like probing the late-game behavior of other agents early in the game. (E.G. "If you think this is the last round, what do you do?")
8DavidLS
Oh cool! You allow an agent to see how their opponent would respond when playing a 3rd agent (just call run with different source code). [Edit: which allows for arbitrary message passing -- the coop bots might all agree to coop with anyone who coops with (return C)] However you also allow for trivially determining if an agent is being simulated: simply check how much fuel there is, which is probably not what we want.

I second the rec for Feynman volume 1: it was my favorite text as a freshman, though the class I took used another one. Since that was in the last millennium and I haven't kept up, I won't comment on other books. Volumes 2 and 3 won't be accessible to beginners.

Yes, tentatively. I've read the textbook, more like given it a first pass, and it's excellent. This should help me stick to a more systematic study. If the video lectures have no transcripts, that'd suck, though (I'm hard of hearing).

O shame to men! Devil with devil damned / Firm concord holds; men only disagree / Of creatures rational

-- Milton, Paradise Lost: not on Aumann agreement, alas

0Document
Are you posting it for the Aumann-agreement meaning or the intended one?
9MixedNuts
Yeah, but humans only exist of creatures rational.

A related example that I, personally, considered science fiction back in the 80s: Jerry Pournelle's prediction that by the year 2000 you'd be able to ask a computer any question, and if there was a humanly-known answer, get it back. Google arrived with a couple years to spare. To me that had sounded like an AI-complete problem even were all the info online.

You bring up cryonics and AI. 25 years ago Engines of Creation had a chapter on each, plus another on... a global hypertext publishing network like the Web. The latter seemed less absurd back then than the first two, but it was still pretty far out there:

One of the things I did was travel around the country trying to evangelize the idea of hypertext. People loved it, but nobody got it. Nobody. We provided lots of explanation. We had pictures. We had scenarios, little stories that told what it would be like. People would ask astonishing questions, like “w

... (read more)

I know someone who was on dialysis while waiting for a transplant. It was really hard on them, and for a while it looked like they might not pull through. I don't know how common such an experience is.

0wedrifid
I am not sure either but reducing the need for dialysis was certainly what I had in mind when considering 'other than just lives saved' benefits from having spare organs floating around.

A doctor faces a patient whose problem has resisted decision-tree diagnosis -- decision trees augmented by intangibles of experience and judgement, sure. The patient wants some creative debugging, which might at least fail differently. Will they get their wish? Not likely: what's in it for the doctor? The patient has some power of exit, not much help against a cartel. To this patient, to first order, Phil Goetz is right, and your points partly elaborate why he's right and partly list higher-order corrections.

(I did my best to put it dispassionately, but I'm rather angry about this.)

-1Chala
Um... what, so you'd rather have diagnoses that are not based upon data? Or a diagnosis which is made up versus no diagnosis? I don't quite understand what you mean.

Illnesses in the human body cannot be solved in the same way as an engineering problem, particularly at the margins. Most of the medical knowledge that could be derived without careful and large clinical trials is already known - I'm not sure what you expect a single doctor to do. Furthermore, note that most patients will not die undiagnosed - bar situations, such as in geriatric patients, where many things are so suboptimal that you just can't sort out what is killing them and what is just background noise. It is very rare that "creative debugging" would be of any use at all.

Secondly, many patients in a terminal situation want more medicine. They feel that not treating with aggressive chemotherapy or some such treatment is giving up. This is not always the case: in terminal illnesses it is often palliative care that is the best option, and avoiding aggressive treatment will in fact lead to a longer life. No amount of debugging will change that.

Let me stress once again that it is not often that a patient will die without a diagnosis having been achieved where the correct diagnosis would have materially changed the outcome.

I've wondered lately, while reading The Laws of Thought, whether BDDs (binary decision diagrams) might help human reasoning too -- the kind of reasoning that gets formalized as boolean logic, of course.
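To make "BDD" concrete, here is a toy Scheme sketch of the data structure itself -- a hand-built decision diagram evaluated by walking nested tests; not any particular BDD library:

(define (node var lo hi) (list var lo hi))

;; Evaluate a decision diagram against an environment mapping variables to #t/#f:
;; a leaf is a boolean; an internal node tests its variable and takes the
;; low or high branch.
(define (bdd-eval bdd env)
  (if (boolean? bdd)
      bdd
      (let ((var (car bdd)) (lo (cadr bdd)) (hi (caddr bdd)))
        (bdd-eval (if (cdr (assq var env)) hi lo) env))))

;; "x and y", with x tested first:
(define x-and-y (node 'x #f (node 'y #f #t)))

(display (bdd-eval x-and-y '((x . #t) (y . #t)))) (newline)  ; #t
(display (bdd-eval x-and-y '((x . #t) (y . #f)))) (newline)  ; #f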

This article reminded me of your post elsewhere about lazy partial evaluation / explanation-based learning and how both humans and machines use it.

9Johnicholas
You do manipulate BDDs as a programmer when you deal with if- and cond-heavy code. For example, you reorder tests to make the whole cleaner. The code that you look at while refactoring is a BDD, and if you're refactoring, a sequence of snapshots of your code would be an equivalence proof.

This is the lazy partial evaluation post, cut and pasted from my livejournal:

Campbell's Heroic Cycle (very roughly) is when the hero experiences a call to adventure, and endures trials and tribulations, and then returns home, wiser for the experience, or otherwise changed for the better.

Trace-based just-in-time compilation is a technique for simultaneously interpreting and compiling a program. An interpreter interprets the program, and traces (records) its actions as it does so. When it returns to a previous state (e.g. when the program counter intersects the trace), then the interpreter has just interpreted a loop. On the presumption that loops usually occur more than once, the interpreter spends some time compiling the traced loop, and links the compiled chunk into the interpreted code (this is self-modifying code); then it continues interpreting the (modified, accelerated) program.

Explanation-based learning is an AI technique where an AI agent learns by executing a general strategy, and then, when that strategy is done, succeed or fail, compressing or summarizing the execution of that strategy into a new fact or item in the agent's database.

In general, if you want to make progress, it seems (once you phrase it that way) just good sense that, any time you find yourself "back in the same spot", you should invest some effort into poring over your logs, trying to learn something - lest you be trapped in a do loop. However, nobody taught me that heuristic (or if they tried, I didn't notice) in college.

What does "back in the same spot" mean? Well, returning from a recursive call, or backjumping to the top of an iterative loop, are both examples. It doesn't mean you haven't

The slowest phase in a nonoptimizing compiler is lexical scanning. (An optimizer can usefully absorb arbitrary amounts of effort, but most compiles don't strictly need it.) For most languages, scanning can be done in a few cycles/byte. Scanning with finite automata can also be done in parallel in O(log(n)) time, though I don't know of any compilers that do that. So, a system built for fast turnaround, using methods we know now (like good old Turbo Pascal), ought to be able to compile several lines/second given 1 kcycle/sec. Therefore you still want to reco... (read more)
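Spelling out the lines-per-second estimate above, with assumed constants (the cycles/byte and bytes/line figures are guesses for illustration, not measurements of any particular compiler):

(define cycles-per-byte 5)       ; "a few cycles/byte" for scanning
(define bytes-per-line 40)       ; a typical source line, whitespace included
(define cycles-per-second 1000)  ; the 1 kcycle/sec budget above

(define lines-per-second
  (/ cycles-per-second (* cycles-per-byte bytes-per-line)))

(display lines-per-second) (newline)   ; => 5, i.e. several lines/second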

I can't make it. Anyone going through Burbank would be welcome to stop by my place for a chat, though -- it's quiet here. Email withal@gmail.com for the address.

An idealized free market is one of selfish rational agents competing (with a few extra conditions I'm skipping). I'm moderately confident this could work pretty well in the absence of "general" (if such a thing exists) or perhaps human "intelligence", but I'm not familiar enough with simulations of markets to be certain.

Eric Baum's papers, among others, show this kind of thing applied to AI. There doesn't seem to have been much followup.

Comparative Ecology: A Computational Perspective compares this idea to the human economy and biologic... (read more)

Doug Orleans told me once of a version like this he made to be played with an IRC or MUD bot (I forget which). A rule was a regular expression. (This came up when I mentioned doing it with Lisp s-expressions for the koans instead.)

About this article's tags: you want dark_arts, judging by the tags in the sidebar. The 'arts' tag links to posts about fiction, etc.

ObDarkArts101: Here's a course that could actually have been titled that:

Writing Persuasion (Spring 2011) A course in persuasive techniques that do not rely on overt arguments. It would not be entirely inaccurate to call this a course in the theory, practice, and critique of sophistry. We will explore how putatively neutral narratives may be inflected to advance a (sometimes unstated) position; how writing can exploit reader

... (read more)
9orthonormal
(continued) On the first day, they teach you how to quote selectively...

There might be more agreement here than meets the eye. Drexler often posts informatively and approvingly about progress in DNA nanotechnology and other bio-related tech at http://metamodern.com ; this is less surprising when you remember that his very first nanotech paper outlined protein engineering as the development path. Nanosystems is mainly about establishing the feasibility of a range of advanced capabilities which happen to not already be done by biology, and for which it's not obvious how it could. Biology and its environment being complicated and ... (read more)
