Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: Elo 06 September 2017 08:31:52PM 2 points [-]

You might want to make this a meetup rather than a discussion post. That will alert people who are geographically local to the event.

Comment author: fowlertm 07 September 2017 03:13:28PM 0 points [-]

That hadn't even occurred to me, thank you! Do you think it'd be inappropriate? This isn't an LW-specific meetup, just a bunch of tech nerds getting together to discuss this huge tech project I just finished.

Come check out the Boulder Future Salon this Saturday!

1 fowlertm 06 September 2017 03:49PM
I'm giving a talk on the STEMpunk Project this Saturday at the Boulder Future Salon.
I love BFS and I would encourage you to come check it out if you're in the area.
Let me tell you a story which illustrates why I think they're a valuable group. Once upon a time I went to a presentation there given by a member who'd written a program that generates artificial music. As we were waiting around one of the other guys (whose name I can't remember off the top of my head) just randomly handed me a book and said "you'd probably get a kick out of this."
It was Rudolf Carnap's "The Logical Structure of the World". I read the introduction, thumbed through it a bit, and we had a brief conversation about its relevance to philosophy and to recondite areas of software engineering like database design.

Reflecting on this episode later I realized how remarkable it was. It's not like this other person knew me very well, but by the mere fact that I'd walked through the door he assumed I'd be able to read a book like this and that I'd want to.
I have encountered precious few places like this anywhere.
There happened to be a guitar in the facility, and later in that same meetup I had a musical duel with the software my friend had created.
Any place where you can find robot music and logical positivism is a place worth exploring.
Comment author: Gyrodiot 05 November 2016 06:20:21PM 3 points [-]

Hello, fellow AIMA reader :-) I'm positive a fair number of LW members have read the book, or at least the first chapters. I'm one of them!

How do you want to communicate? You can ask your questions here, or exchange direct messages, or chat... the possibilities are endless!

Comment author: fowlertm 05 November 2016 11:17:22PM 1 point [-]

Thanks! I suppose I wasn't as clear as I could have been: I was actually wondering if there are any people who are reading it currently, who might be grappling with the same issues as me and/or might be willing to split responsibility for creating Anki cards. This textbook is outstanding, and I think there would be significant value in anki-izing as much of it as possible.

Anyone else reading "Artificial Intelligence: A Modern Approach"?

1 fowlertm 05 November 2016 03:22PM

I'm almost done with the third chapter of AIMA by Russell and Norvig. Is anyone else reading it? It'd be nice to have someone to talk the concepts over with, and perhaps share Anki-card-creation duties. 

Comment author: PeterCoin 29 May 2016 08:14:26AM 0 points [-]

I'm confused here: you seem to be analyzing a troubleshooting process. How exactly did the troubleshooting process fail? I can see that there are some criticisms of what was done, but I don't see how this troubleshooting process resulted in disaster.

Comment author: fowlertm 29 May 2016 04:52:03PM 0 points [-]

Because I missed numerous implications, needlessly increased causal opacity, and failed to establish a baseline before I started fiddling with variables. Those are poor troubleshooting practices.

LINK: Performing a Failure Autopsy

1 fowlertm 27 May 2016 02:21PM

In which I discuss the beginnings of a technique for learning from certain kinds of failures more effectively:

"What follows is an edited version of an exercise I performed about a month ago following an embarrassing error cascade. I call it a ‘failure autopsy’, and on one level it’s basically the same thing as an NFL player taping his games and analyzing them later, looking for places to improve.

But the aspiring rationalist wishing to do something similar faces a more difficult problem, for a few reasons:

First, the movements of a mind can’t be seen in the same way the movements of a body can, meaning a different approach must be taken when doing granular analysis of mistaken cognition.

Second, learning to control the mind is simply much harder than learning to control the body.

And third, to my knowledge, nobody has really even tried to develop a framework for doing with rationality what an NFL player does with football, so someone like me has to pretty much invent the technique from scratch on the fly.  

I took a stab at doing that, and I think the result provides some tantalizing hints at what a more mature, more powerful version of this technique might look like. Further, I think it illustrates the need for what I’ve been calling a “Dictionary of Internal Events”, or a better vocabulary for describing what happens between your ears."

Talk today at CU Boulder

1 fowlertm 05 April 2016 04:26PM

I'm giving a talk today on the future of governance at the University of Colorado at Boulder, in room ECON 117, at 5 p.m.

While the talk itself isn't concerned with rationality, I'd still be interested in networking with any LW sorts who happen to be in the area.


-Trent Fowler

Comment author: fowlertm 06 December 2015 04:22:28PM 0 points [-]

So a semi-related thing I've been casually thinking about recently is how to develop what basically amounts to a hand-written programming language.

Like a lot of other people I make to-do lists and take detailed notes, and I'd like to develop a written notation that not only captures basic tasks, but maybe also simple representations of the knowledge/emotional states of other people (e.g. employees).

More advanced than that, I've also been trying to think of ways I can take notes in a physical book that will allow a third party to make Anki flashcards or Evernote entries based on my script. It has to be extremely dense to fit in the margins of a book, and must capture distinct commands like "make a single cloze deletion card for this sentence" and "make four separate cards for this sentence, cloze deleting a different piece of information for each card but otherwise leaving everything intact" and so on.

Any thoughts?
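One way to pin down a notation like this is to treat the margin codes as a tiny command language and write down their semantics explicitly. Here's a minimal sketch in Python, with entirely invented shorthand codes (`cz1`, `czN`, `def`) standing in for whatever symbols you'd actually settle on; it just shows that the translation from dense margin mark to card-creation instruction can be made unambiguous for a third party:

```python
# Hypothetical margin-notation parser. The codes (cz1, cz4, def)
# are invented for illustration, not an established convention.

def parse_margin_code(code: str) -> dict:
    """Translate a shorthand margin code into a card-creation instruction."""
    if code == "cz1":
        # One cloze deletion card for the marked sentence.
        return {"action": "cloze", "cards": 1,
                "note": "single cloze deletion for this sentence"}
    if code.startswith("cz"):
        # czN: N separate cards, each cloze-deleting a different
        # piece of information, otherwise leaving the sentence intact.
        n = int(code[2:])
        return {"action": "cloze", "cards": n,
                "note": f"{n} cards, one deletion each"}
    if code == "def":
        # Basic card: term on the front, definition on the back.
        return {"action": "basic", "cards": 1,
                "note": "front: term, back: definition"}
    raise ValueError(f"unknown margin code: {code}")
```

So a transcriber seeing `cz4` next to a sentence would call `parse_margin_code("cz4")` (or just consult the same table) and know to produce four cards. The hard part, of course, is keeping the code set small enough to memorize while still covering the cases you actually encounter.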

Comment author: iarwain1 04 October 2015 09:08:31PM 2 points [-]

Why do you say Carnegie Mellon? I'm assuming it's because they have the Center for Formal Epistemology and a very nice-looking degree program in Logic, Computation and Methodology. But don't some other universities have comparable programs?

Do you have direct experience with the Carnegie Mellon program? At one point I was seriously considering going there because of the logic & computation degree, and I might still consider it at some point in the future.

Comment author: fowlertm 12 October 2015 03:56:48PM 1 point [-]

I mentioned CMU for the reasons you've stated and because Lukeprog endorsed their program once (no idea what evidence he had that I don't).

I have also spoken to Katja Grace about it, and there is evidently a bit of interest in LW themes among the students there.

I'm unaware of other programs of a similar caliber, though there are bound to be some. If anyone knows of any, by all means list them; that was the point of my original comment.

Comment author: fowlertm 04 October 2015 04:07:43PM 4 points [-]

I think there'd be value in just listing graduate programs in philosophy, economics, etc., by how relevant the research already being done there is to x-risk, AI safety, or rationality. Or by whether or not they contain faculty interested in those topics.

For example, if I were looking to enter a philosophy graduate program it might take me quite some time to realize that Carnegie Mellon probably has the best program for people interested in LW-style reasoning about something like epistemology.
