All of fowlertm's Comments + Replies

fowlertm10

YouTube can generate those automatically, or you can rip the .mp4 with an online service (just Google around, there are tons), then pass it to something like Otter.ai
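If you'd rather keep it local, here's a minimal sketch of that pipeline in Python, assuming the yt-dlp CLI and the open-source openai-whisper package are installed (Whisper standing in for a hosted service like Otter.ai; the URL is a placeholder):

```python
# Sketch: pull just the audio track from a YouTube video, then transcribe it.
# Assumes `yt-dlp` and `openai-whisper` are installed; Whisper stands in
# here for a hosted transcription service like Otter.ai.
import subprocess
import whisper

url = "https://www.youtube.com/watch?v=VIDEO_ID"  # placeholder URL

# Extract the audio only -- no need to rip the full .mp4.
subprocess.run(
    ["yt-dlp", "-x", "--audio-format", "mp3", "-o", "interview.%(ext)s", url],
    check=True,
)

model = whisper.load_model("base")  # "base" is fast; "medium" is more accurate
result = model.transcribe("interview.mp3")
print(result["text"])
```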

fowlertm90

We recently released an interview with independent scholar John Wentworth:

It mostly centers on two themes: "abstraction" (forming concepts) and "agency" (dealing with goal-directed systems).

Check it out!

2Thomas Kwa
Is there an AI transcript/summary?

I'm not much of a LWer these days, but I do co-host a podcast on philosophy and emerging technologies which has a growing library of interviews with LWers:

https://www.youtube.com/@futuratipodcast5130/videos

I suppose I'm interested in both, but that reference is very helpful. I'm also vaguely aware of some literature on what is called "private governance" that would be germane to this discussion. 

Interesting claim. We specifically asked him that and he didn't think that was the case, but you could be right!

1p.b.
There is very steady technological progress in both, and generally more potential. But that's only the technological side, where I think leap-frogging is likely or has already happened. I think there are very significant political hurdles to actually applying genome synthesis or gene editing for intelligence in humans. He probably rightly expects that those won't be overcome, while embryo selection has an easier "in" via IVF, where you have to select an embryo anyway.

My admin pointed out the RSS feed (which I assume is what you found) and he's going to see if there's a way to make subscribing easier. 

Thanks for bringing this to my attention!

I'm looking for a really short introduction to light therapy and a rig I can put in my basement office. Over the years I've noticed my productivity just falls off a goddamn cliff after sundown during the winter months, and I'd like to try to do something about it.

After the requisite searching I see a dozen or so references across lesswrong, and was wondering if someone could just tell me how the story ends and where I can shop for bulbs. 

For the most part I was thinking about just making things brighter, but I'm open to trying red-light therapy too if people have had success with that.  

3Matt Goldenberg
I like Ben Kuhn's solution in this comment: https://www.benkuhn.net/lux/#comment-1595033477 A few 7-way splitters and a whole lot of 100-watt-equivalent LEDs.
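For rough sizing, a back-of-the-envelope calculation gives a feel for why the splitters are needed. All numbers below are assumptions of mine, not Ben Kuhn's: ~1600 lumens per 100-watt-equivalent LED, a 10,000 lux bright-light target, and half the emitted light actually reaching the work area.

```python
# Rough sizing: how many 100-watt-equivalent LEDs for bright-light levels?
# All constants are assumptions for illustration, not measured values.
LUMENS_PER_BULB = 1600   # typical 100W-equivalent LED output
TARGET_LUX = 10_000      # common bright-light-therapy level (lux = lumens/m^2)
AREA_M2 = 4.0            # the desk area you actually care about
EFFICIENCY = 0.5         # fraction of emitted light landing on that area

lumens_needed = TARGET_LUX * AREA_M2 / EFFICIENCY
print(f"~{lumens_needed / LUMENS_PER_BULB:.0f} bulbs")  # ~50 bulbs
```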

Thanks for the recommendations. One thing that would help is just knowing what this is called. Do your books give it a name?

Not yet. That's part of what we're hoping to learn about here.

I like that idea too. How hard is it to publish in academic journals? I don't have more than a BS, but I have done original research and I can write in an academic style.

0IlyaShpitser
Pretty hard, I suppose.

It's weird, though: if you are asking these types of questions, why are you trying to run an institute? Typically very senior academics do that. (I am not singling you out either; I have the same question for folks running MIRI.)

A post-mortem isn't quite the same thing. Mine has a much more granular focus on the actual cognitive errors occurring, with neat little names for each of them, and has the additional step of repeatedly visualizing yourself making the correct move.

https://rulerstothesky.com/2016/03/17/the-stempunk-project-performing-a-failure-autopsy/

This is a rough idea of what I did; the more awesome version with graphs will require an email address to which I can send a .jpg.

0Lumifer
Neat little names, I see. Thank you, I'll pass on the jpg awesomeness.

Different reasons, none of them nefarious or sinister.

I emailed a technique I call 'the failure autopsy' to Julia Galef, which as far as I know is completely unique to me. She gave me a cheerful "I'll read this when I get a chance" and never got back to me.

I'm not sure why I was turned down for a MIRIx workshop; I'm sure I could've managed to get some friends together to read papers and write ideas on a whiteboard.

I've written a few essays for LW, the reception of which was lukewarm. Don't know if I'm just bad at picking topics of interest or if it... (read more)

0ChristianKl
From the outside view, a person who has no luck building contacts with existing institutions is unlikely to be a good person to start a new institute. Of course, getting someone like Eric S. Raymond to be open to writing a book with you is a good sign.
1IlyaShpitser
Try publishing in mainstream AI venues? (AAAI has some sort of safety workshop this year). I am assuming if you want to start an institute you have publishable stuff you want to say.
0Lumifer
Ahem. The rest of the world calls it a post-mortem. See e.g. this. So you do not know why. Did you try to figure it out? Do a post-mortem, maybe?

I hadn't known about that, but I came to the same conclusion!

I gave that some thought! LW seems much less active than it once was, though, so that strategy isn't as appealing. I've also written a little for this site and the reception has been lukewarm, so I figured a book would be best.

2[anonymous]
We're now a lot more active at LW2.0! Some of my stuff which wasn't that popular here is getting more attention there. Maybe you could try it too?

That's not a bad idea. As it stands I'm pursuing the goal of building a dedicated group of people around these ideas, which is proving difficult enough as it is. Eventually I'll want to move forward with the institute, though, and it seems wise to begin thinking about that now.

I have done that, on a number of different occasions. I have also tried for literally years to contribute to futurism in other ways; I attempted to organize a MIRIx workshop and was told no because I wasn't rigorous enough or something, despite the fact that on the MIRIx webpage it says:

"A MIRIx workshop can be as simple as gathering some of your friends to read MIRI papers together, talk about them, eat some snacks, scribble some ideas on whiteboards, and go out to dinner together."

Which is exactly what I was proposing.

I have tried for years to... (read more)

1Lumifer
Do you know why?
2John_Maxwell
Maybe your mistake was to write a book about your experience of self-study instead of making a series of LW posts. Nate Soares took this approach and he is now the executive director of MIRI :P

You're right. Here is a reply I left on a Reddit thread answering this question:

This institution will essentially be a formalization and scaling-up of a small group of futurists that already meets to discuss emerging technologies and similar subjects. Despite the fact that they've been doing this for years, attendance is almost never more than ten people (25 attendees would be fucking Woodstock).

I think the best way to begin would be to try and use this seed to create a TED-style hub of recurring discussions on exactly these topics. There's a lot of low-hang... (read more)

(1) The world does not have a surfeit of intelligent technical folks thinking about how to make the future a better place. Even if I founded a futurist institute in the exact same building as MIRI/CFAR, I don't think it'd be overkill.

(2) There is a profound degree of technical talent here in central Colorado which doesn't currently have a nexus around which to have these kinds of discussions about handling emerging technologies responsibly. There is a real gap here that I intend to fill.

2turchin
You could start a local chapter of the Transhumanist party, or of anything you want, and just gather people to discuss any futuristic topics: life extension, AI safety, whatever. Official registration of such activity is probably a loss of time and money, unless you know what you are going to do with it, like getting donations or renting an office. There is no need to start an institute if you don't have a dedicated group of people around. An institute consisting of one person is something strange.
5gwern
You know, you could do that. By giving them the money.

That hadn't even occurred to me, thank you! Do you think it'd be inappropriate? This isn't a LW specific meetup, just a bunch of tech nerds getting together to discuss this huge tech project I just finished.

Thanks! I suppose I wasn't as clear as I could have been: I was actually wondering if there are any people who are reading it currently, who might be grappling with the same issues as me and/or might be willing to split responsibility for creating Anki cards. This textbook is outstanding, and I think there would be significant value in anki-izing as much of it as possible.
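On the mechanics of splitting that work: one low-friction scheme would be a shared CSV of question/answer pairs that anyone can compile into a deck. Here's a minimal sketch using the genanki Python package; the CSV layout and file names are just an assumed convention, not an established workflow:

```python
# Sketch: compile a shared CSV ("front,back" per row) into an Anki deck,
# so several readers can split the card-writing work between them.
import csv
import genanki

model = genanki.Model(
    1607392319,  # arbitrary fixed ID so re-imports update rather than duplicate
    "Textbook Q&A",
    fields=[{"name": "Front"}, {"name": "Back"}],
    templates=[{
        "name": "Card 1",
        "qfmt": "{{Front}}",
        "afmt": "{{FrontSide}}<hr id='answer'>{{Back}}",
    }],
)
deck = genanki.Deck(2059400110, "Textbook")  # arbitrary fixed deck ID

with open("cards.csv", newline="") as f:
    for front, back in csv.reader(f):
        deck.add_note(genanki.Note(model=model, fields=[front, back]))

genanki.Package(deck).write_to_file("textbook.apkg")
```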

Because I missed numerous implications, needlessly increased causal opacity, and failed to establish a baseline before I started fiddling with variables. Those are poor troubleshooting practices.

So a semi-related thing I've been casually thinking about recently is how to develop what basically amounts to a hand-written programming language.

Like a lot of other people I make to-do lists and take detailed notes, and I'd like to develop a written notation that not only captures basic tasks, but maybe also simple representations of the knowledge/emotional states of other people (e.g. employees).

More advanced than that, I've also been trying to think of ways I can take notes in a physical book that will allow a third party to make Anki flashcards or ev... (read more)
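As a toy illustration of where this could go, here's a sketch in which each notebook line carries a one-character prefix that a transcriber (or OCR pipeline) parses into structured records; the prefixes and parser are invented purely for illustration:

```python
# Toy parser for a hand-written shorthand. Prefixes (invented for
# illustration): ! task, ? open question, @ person's state, * key fact.
# The ? and * lines are natural Anki-card candidates.
import re

LINE = re.compile(r"^([!?@*])\s*(.+)$")
KINDS = {"!": "task", "?": "question", "@": "person", "*": "fact"}

def parse_note(text: str) -> list[dict]:
    records = []
    for line in text.splitlines():
        m = LINE.match(line.strip())
        if m:
            records.append({"kind": KINDS[m.group(1)], "body": m.group(2)})
    return records

page = """
! email venue about projector
@ J.S. -- anxious about the deadline, reassure on Monday
* Bayes: posterior is proportional to likelihood times prior
? what breaks if the prior puts zero mass on the truth?
"""
print(parse_note(page))
```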

I mentioned CMU for the reasons you've stated and because Lukeprog endorsed their program once (no idea what evidence he had that I don't).

I have also spoken to Katja Grace about it, and there is evidently a bit of interest in LW themes among the students there.

I'm unaware of other programs of a similar caliber, though there are bound to be some. If anyone knows of any, by all means list them, that was the point of my original comment.

I think there'd be value in just listing graduate programs in philosophy, economics, etc., by how relevant the research already being done there is to x-risk, AI safety, or rationality. Or by whether or not they contain faculty interested in those topics.

For example, if I were looking to enter a philosophy graduate program it might take me quite some time to realize that Carnegie Mellon probably has the best program for people interested in LW-style reasoning about something like epistemology.

4Vika
I think it depends more on specific advisors than on the university. If you're interested in doing AI safety research in grad school, getting in touch with professors who got FLI grants might be a good idea.
3iarwain1
Why do you say Carnegie Mellon? I'm assuming it's because they have the Center for Formal Epistemology and a very nice-looking degree program in Logic, Computation and Methodology. But don't some other universities have comparable programs? Do you have direct experience with the Carnegie Mellon program? At one point I was seriously considering going there because of the logic & computation degree, and I might still consider it at some point in the future.

Data point/encouragement: I'm getting a lot out of these, and I hope you keep writing them.

I'm one of those could-have-beens who dropped mathematics early on despite a strong interest and spent the next decade thinking he sucked at math before he rediscovered numerical proclivities in his early 20s because FAI theory caused him to peek at Discrete Mathematics.

0JonahS
Thanks :-).

Both unknown to me, thanks :)

Why? What's wrong with wanting to be masculine?

0PhilGoetz
If it were wrong, it would be a problem, not problematic. That defies the dictionary definition, but "problem" can mean something with a simple solution that hasn't yet been implemented, while "problematic" connotes a persistent problem with no easy solution. The difficulties with it are already listed in the post, as they're the motivation for the post. Though it might be more fair to say gender is problematic.

Interesting tie-in, thanks.

Incidentally, how cool would it be to be able to say "my epistemology is the most advanced"? If nothing else it'd probably be a great pickup line at LW meetups.

Agreed. I think in light of the fact that a lot of this stuff is learned iteratively, you'd want to unpack 'basic mathematics'. I'm not sure of the best way to graphically represent iterative learning, but maybe you could have arrows going back to certain subjects, or you could have 'statistics round II' as one of the nodes in the network.

It seems like insights are what you're really aiming at, so maybe instead of 'probability theory' you have a node for 'distributions' and 'variance' at some early point in the tree, then later you have 'Bayesian vs. Frequentist reasoning'.

This would also help you unpack basic mathematics, though I don't know much about the dependencies either. I hope to, soon :)
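If someone wanted to prototype that structure, the curriculum could be modeled as a dependency graph with repeat passes as distinct nodes. A minimal sketch with Python's standard library; the node names are illustrative, not a proposed curriculum:

```python
# Sketch: a learning-dependency graph where "statistics round II" is its own
# node, capturing the iterative pass. Node names are illustrative only.
from graphlib import TopologicalSorter

deps = {  # node -> subjects it depends on
    "distributions": {"basic mathematics"},
    "variance": {"distributions"},
    "statistics round I": {"distributions", "variance"},
    "Bayesian vs. Frequentist reasoning": {"statistics round I"},
    "statistics round II": {"Bayesian vs. Frequentist reasoning"},
}

# One valid study order that respects every dependency:
print(list(TopologicalSorter(deps).static_order()))
```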

I thought of that as well; it does need some work in terms of presentation. It'd be a good place to start, yes.

My two cents: I studied math pretty intensively on my own and later started programming. To my pleasant surprise, the thinking style involved in math transferred almost directly into programming. I'd imagine that the inverse is also true.

3[anonymous]
Indeed, many people cross forward and backward between the two.

I'm sorry I missed this and hope it went well. Work has been chaotic lately, but I absolutely support a LW presence in Denver. I've tried once before to get a similar group off the ground, and would be happy to help this one along with presentations, planning, rationalist game nights, whatever.

0TheStevenator
I'd love it if you could attend. The time is flexible if your schedule needs a little wiggle room.

Actually, I folded it into another group called the Boulder Future Salon, which doesn't deal exclusively with x-risk but which has other advantages going for it, like a pre-existing membership.

How would you recommend responding?

2chaosmage
I think I'd point out that he's a fairly public person, which both should increase trust and gives more material for ad hominem attacks. And once someone else has dragged the discussion down to a personal level, you might as well throw in appeals to authority with Elon Musk on AI risk, i.e. change the subject.

I think I'm basically prepared for that line of attack. MIRI is not a cult, period. When you want to run a successful cult you do it Jim-Jones-style, carting everyone to a secret compound and carefully filtering the information that makes it in or out. You don't work as hard as you can to publish your ideas in a format where they can be read by anyone, you don't offer to publicly debate William Lane Craig, and you don't seek out the strongest versions of criticisms of your position (i.e. those coming from Robin Hanson).

Eliezer hasn't made it any easier on ... (read more)

1chaosmage
Sure MIRI isn't a cult, but I didn't say it was. I pointed out that Eliezer does play a huge role in it and he's unusually vulnerable to ad hominem attack. If anyone does that, your going with "whatever his flaws" isn't going to sound great to your audience.

"Note that AI is certainly not a great filter: an AI would likely expand through the universe itself"

I was confused by this; what is it supposed to mean? Off the top of my head it certainly seems like there is sufficient space between 'make an AI that causes the extinction of the human race or otherwise makes expanding into space difficult' and 'make an AI that causes the extinction of the human race but which goes on to colonize the universe' for AI to be a great filter.

3[anonymous]
It rests on the hypothesis that the AI is not only dangerously intelligent but able to self-improve to levels where it can more-or-less direct an entire civilization's worth of material infrastructure towards its own goals. At that point, it would have an easy time getting a space program going, mining resources from the rest of its solar system, and eventually, achieving interstellar existence (via the sheer patience to cross interstellar distances at sublight speeds).

The universe has a limited amount of free energy. For almost any goal or utility function that an AI had, it would do better the more free energy it had. Hence, almost every type of hyper-intelligent AI that could build self-replicating nanobots would quickly capture as much free energy as it could, meaning it would likely expand outwards at near the speed of light.

At the very least, you would expect a hyper-intelligent AI to "turn off stars" or capture their free energy to prevent such astronomical waste of finite resources.

This comment is a poorly-organized brain dump which serves as a convenient gathering place for what I've learned after several days of arguing with every MIRI critic I could find. It will probably get its own expanded post in the future, and if I have the time I may try to build a near-comprehensive list.

I've come to understand that criticisms of MIRI's version of the intelligence explosion hypothesis and the penumbra of ideas around it fall into two permeable categories:

Those that criticize MIRI as an organization or the whole FAI enterprise (people mak... (read more)

A good point; I must spend some time looking into the FOOM debate.

I've heard the singularity-pattern-matches-religious-tropes argument before and hadn't given it much thought, but I find your analysis that the argument is wrong to be convincing, at least for the futurism I'm acquainted with. I'm less sure that it's true of Kurzweil's brand of futurism.

Correct, I've been pursuing that as well.

Only the IE as defended by MIRI; it'd be a much longer talk if I wanted to defend everything they've put forward!

2[anonymous]
Short-duration hard takeoff, à la That Alien Message? That's one of the hardest claims for MIRI to justify.
6[anonymous]
I used ClickCharts to make the diagrams.

For those interested, I ended up donating to the Brain Preservation Foundation, MIRI, SENS, and the Alzheimer's Disease Research Fund.

More detail here:

http://rulerstothesky.wordpress.com/2014/04/25/in-memorium/

Good stuff. It took me quite a long time to work these ideas out for myself. There are also situations in which it can be beneficial to let somewhat obvious non-truths continue existing.

Example: your boss is good at doing something but his theoretical explanation for why it works is nonsense. Most of the time questioning the theory is only likely to piss them off, and unless you can replace it with something better, keeping your mouth shut is probably the safest option.

Relevant post:

http://cognitiveengineer.blogspot.com/2013/06/when-truth-isnt-enough.html

7Viliam_Bur
What happens when you try to replicate what your boss is doing? For example, when you decide to start your own competing company. Then I suspect it would be useful to know truths like "my boss always says X, but really does Y when this situation happens", so that when the situation happens, you remember to do Y instead of X, even if, as an employee, saying "you always say X, but you actually do Y" to your boss would be dangerous.

So, some truths may be good to know while being dangerous to talk about in front of people who react negatively to hearing them. You may remember that "X" is the proper thing to say to your boss, and silently remember that "Y" is the thing that probably contributes to his success in that position.

Replacing your boss is not the only situation where knowing the true boss-algorithm is useful. For example, knowing the true mechanism by which your boss decides who will get a bonus and who will get fired.
7CronoDAS
So saving people 30 and younger so they can die at 80 instead isn't good enough...

The Brain Preservation Foundation was one of the first charities I thought of, I'll definitely be considering them.
