Comment author: PeterCoin 29 May 2016 08:14:26AM 0 points

I'm confused here: you seem to be analyzing a troubleshooting process. How exactly did the troubleshooting process fail? I can see that there are some criticisms of what was done, but I don't see how this troubleshooting process resulted in disaster.

Comment author: fowlertm 29 May 2016 04:52:03PM 0 points

Because I missed numerous implications, needlessly increased causal opacity, and failed to establish a baseline before I started fiddling with variables. Those are poor troubleshooting practices.

LINK: Performing a Failure Autopsy

1 fowlertm 27 May 2016 02:21PM

In which I discuss the beginnings of a technique for learning from certain kinds of failures more effectively:

"What follows is an edited version of an exercise I performed about a month ago following an embarrassing error cascade. I call it a ‘failure autopsy’, and on one level it’s basically the same thing as an NFL player taping his games and analyzing them later, looking for places to improve.

But the aspiring rationalist wishing to do something similar faces a more difficult problem, for a few reasons:

First, the movements of a mind can’t be seen in the same way the movements of a body can, meaning a different approach must be taken when doing granular analysis of mistaken cognition.

Second, learning to control the mind is simply much harder than learning to control the body.

And third, to my knowledge, nobody has really even tried to develop a framework for doing with rationality what an NFL player does with football, so someone like me has to pretty much invent the technique from scratch on the fly.  

I took a stab at doing that, and I think the result provides some tantalizing hints at what a more mature, more powerful version of this technique might look like. Further, I think it illustrates the need for what I’ve been calling a “Dictionary of Internal Events”, or a better vocabulary for describing what happens between your ears."

Talk today at CU Boulder

1 fowlertm 05 April 2016 04:26PM

I'm giving a talk today on the future of governance at the University of Colorado at Boulder, in room ECON 117, at 5 p.m.

While the talk itself isn't concerned with rationality, I'd still be interested in networking with any LW sorts who happen to be in the area.

Best,

-Trent Fowler

Comment author: fowlertm 06 December 2015 04:22:28PM 0 points

So a semi-related thing I've been casually thinking about recently is how to develop what basically amounts to a hand-written programming language.

Like a lot of other people, I make to-do lists and take detailed notes, and I'd like to develop a written notation that not only captures basic tasks, but maybe also simple representations of the knowledge/emotional states of other people (e.g. employees).

More advanced than that, I've also been trying to think of ways I can take notes in a physical book that will allow a third party to make Anki flashcards or Evernote entries based on my script. It has to be extremely dense to fit in the margins of a book, and must capture distinct commands like "make a single cloze deletion card for this sentence" and "make four separate cards for this sentence, cloze deleting a different piece of information for each card but otherwise leaving everything intact" and so on.
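One way such a scheme might work (a rough sketch only; the codes `cz` and `czN` are invented here for illustration, not an established notation): the margin holds a terse code next to each marked sentence, and the third party feeds code-plus-sentence pairs into a small script that expands them into card specifications for Anki.

```python
import re

def expand_margin_code(code, sentence):
    """Expand a hypothetical margin code plus its marked sentence into card specs.

    "cz"  -> one cloze-deletion card covering the whole sentence
    "czN" -> N separate cards, each cloze-deleting a different piece of info
    """
    match = re.fullmatch(r"cz(\d*)", code)
    if not match:
        raise ValueError(f"unknown margin code: {code!r}")
    # Bare "cz" defaults to a single card.
    n = int(match.group(1)) if match.group(1) else 1
    return [{"type": "cloze", "text": sentence, "deletion_index": i}
            for i in range(1, n + 1)]

# Example: "cz4" written in the margin means "make four cards from this
# sentence, each hiding a different piece of information".
cards = expand_margin_code(
    "cz4", "Mitochondria produce ATP via oxidative phosphorylation.")
```

The transcriber (or a later automation step) would still decide which word each `deletion_index` hides; the margin code only has to be dense enough to disambiguate the command itself.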

Any thoughts?

Comment author: iarwain1 04 October 2015 09:08:31PM 2 points

Why do you say Carnegie Mellon? I'm assuming it's because they have the Center for Formal Epistemology and a very nice-looking degree program in Logic, Computation and Methodology. But don't some other universities have comparable programs?

Do you have direct experience with the Carnegie Mellon program? At one point I was seriously considering going there because of the logic & computation degree, and I might still consider it at some point in the future.

Comment author: fowlertm 12 October 2015 03:56:48PM 1 point

I mentioned CMU for the reasons you've stated and because Lukeprog endorsed their program once (no idea what evidence he had that I don't).

I have also spoken to Katja Grace about it, and there is evidently a bit of interest in LW themes among the students there.

I'm unaware of other programs of a similar caliber, though there are bound to be some. If anyone knows of any, by all means list them; that was the point of my original comment.

Comment author: fowlertm 04 October 2015 04:07:43PM 4 points

I think there'd be value in just listing graduate programs in philosophy, economics, etc., by how relevant the research already being done there is to x-risk, AI safety, or rationality. Or by whether or not they contain faculty interested in those topics.

For example, if I were looking to enter a philosophy graduate program it might take me quite some time to realize that Carnegie Mellon probably has the best program for people interested in LW-style reasoning about something like epistemology.

Comment author: fowlertm 01 June 2015 03:15:20AM 1 point

Data point/encouragement: I'm getting a lot out of these, and I hope you keep writing them.

I'm one of those could-have-beens who dropped mathematics early on despite a strong interest and spent the next decade thinking he sucked at math, before rediscovering his numerical proclivities in his early 20s because FAI theory caused him to peek at discrete mathematics.

Two meetups in Denver/Boulder Colorado

0 fowlertm 05 May 2015 12:58AM

There will be a rationalist meetup in Denver tomorrow at 7:00 pm at Darcy's Pub, 4955 S Ulster St. #103 Denver, CO 80237

 

This Saturday I'll be giving a presentation on the control problem for superintelligent agents. It'll be from 2 to 4 p.m. at the Boulder Hacker Space, 1965 33rd Street, Unit B, Boulder, CO.

 

LW-ish meetup in Boulder, CO

1 fowlertm 10 March 2015 02:32PM

This Saturday at the Hellems Arts and Sciences building, room 185 at the University of Colorado at Boulder, there will be a presentation on Intelligence Explosion dynamics.

Hope to see everyone who can make it.

In response to FOOM Articles
Comment author: lukeprog 05 March 2015 09:58:13PM 4 points

Besides Superintelligence, the latest "major" publication on the subject is Yudkowsky's Intelligence explosion microeconomics. There are also a few articles related to the topic at AI Impacts.

In response to comment by lukeprog on FOOM Articles
Comment author: fowlertm 06 March 2015 02:41:37AM 3 points

Both unknown to me, thanks :)
