Comment author: Faustus2 29 September 2015 06:54:27AM 1 point [-]

I'll be there too. Can't wait :)

Comment author: Manfred 22 July 2015 07:45:30PM *  2 points [-]

The things you directly need are algorithmic complexity theory (the classic textbook is Li and Vitányi) and some way of understanding proofs. Probably start with ordinary mathematical logic / model theory (I'm not sure of a standout textbook here; maybe Manin?), then look into modal logic (the classic textbook is by Boolos).

Prerequisites for those are mathematical logic, set theory, probability theory, and some amount of discrete math.

Comment author: Faustus2 22 July 2015 08:23:41PM 0 points [-]

Thank you, I'm grateful for your time.

Mathematics for AIXI and Gödel machine

0 Faustus2 22 July 2015 06:52PM

Just a quick question, does anyone know which math topics I'd have to learn to understand the work on AIXI and the Gödel machine? Any pointers or suggestions would be appreciated. 

Comment author: Faustus2 27 March 2015 09:00:55PM 0 points [-]

I appreciate the great feedback from all of you, thank you :) I do have another quick question, but it's of a lower priority. As of right now, I hold no degree. I've always been kind of interested in the MIRI workshops, but I've always been nervous about signing up for one because: 1. I'm not sure whether a degree would be necessary to keep up with the level of work people do at the workshop, and 2. in case my first point turned out to be true, I certainly wouldn't want a student with no real formal experience in the kind of maths MIRI deals with (I'm still learning computability and logic, and have started to branch out into set theory and similarly related fields) to be a nuisance to people trying to get some work done by asking them questions all the time. So here is my question stated in full: 'Would I be allowed to participate in a MIRI workshop, given that I have no degree as of right now, and could this factor be to the detriment of others there?' Again, a lower-priority question, but any comments or thoughts from users would be welcomed graciously :)

Just a casual question regarding MIRI

12 Faustus2 22 March 2015 08:16PM

Currently I am planning to start a mathematics degree when I enter university; however, my interest has shifted largely to computational neuroscience and related fields, so I'm now planning to switch to an AI degree when I go to study. Having said that, MIRI has always posed interesting problems to me, and I have entertained the thought of trying to do some work for MIRI before. So my question boils down to this: would there be any problem with taking the AI degree if I ever wanted to try my hand at doing some maths for MIRI? Is a maths degree essential, or would an AI degree with a good grasp of the mathematics related to MIRI work just as well? Any thoughts or musings would be appreciated :)

Comment author: Vulture 10 January 2014 01:15:19AM 2 points [-]

Possibly a stupid question, but what is the relationship between your work and the OpenWorm project?

Comment author: Faustus2 24 January 2015 05:02:29PM 1 point [-]

I would also like to know the answer to this question.

In response to MIRI Research Guide
Comment author: Faustus2 04 December 2014 11:24:08PM 2 points [-]

A quick comment on the segment on tiling agents: on the MIRI site the recommended reading (not counting any MIRI papers) is A Mathematical Introduction to Logic by Enderton, but this page instead recommends Chang and Keisler's Model Theory. Can this be taken to mean that both works are important recommended reading? Are they of equal worth, or should (or rather, could) one be prioritised over the other?

Comment author: Faustus2 24 September 2014 11:21:51AM 5 points [-]

I recommend Neurophilosophy by Patricia Churchland, but a really good general overview of theories of consciousness is the Blackwell Companion to Consciousness. Sorry for the long URL, but there is a link to a PDF version here that might be of use to you (or anyone): http://cies-fsc.googlecode.com/svn/trunk/FSC09/ChangeBlindness/biblio.complémentaire/1405120193%20-%20Max%20Velmans%20-%20The%20Blackwell%20Companion%20to%20Consciousness%20%5B2007%5D.pdf

Comment author: Faustus2 14 December 2013 10:36:36PM 0 points [-]

Hello to you all, I am Chris.

I live in England and attend my local high school (well, in England we call the senior years a sixth form). I take Mathematics, Further Mathematics, Physics and Philosophy. I actually happened upon LessWrong two years ago, when I was 16, whilst searching for interesting discussions on worldviews. Although I had never really been interested in rationality (up until that point, I hasten to add!), I had a seething urge to sort out my worldview as quickly as I could. I just got so sick of the people who went to Sunday school coming out with claims about the universe that didn't jibe with the understanding of modern physics. So I read the reductionism sequence and realised I was a reductionist. The way Eliezer 'spelled it out' just really struck me as a great way to say what I had started to feel. Shortly afterwards naturalism, or rather metaphysical naturalism, became my first great love. I have a good collection of friends, but none of them have really cared for 'waxing on worldviews' like me. I guess I'm just really happy that I get to speak with a community that has stuff in common with me (not just worldviews, but other cool topics as well). I guess camaraderie is eagerly sought. I would love to talk with people of my age group (I suppose 15-24), but of course I should love to meet anyone with a similar mindset to me. I live near Reading. If anyone would like to speak with me, whether through LessWrong, Facebook or just meeting up for a chat, just message me and I shall do my utmost to entertain/be friendly with you. :)

Comment author: OnTheOtherHandle 24 July 2012 06:41:58PM *  9 points [-]

I know this was over a year ago, but Avatar: The Last Airbender would actually be fairly easy. We could simply shoe-horn rationalist values onto the personalities of the four elements.

Fire: Tsuyoku Naritai and The Arrogance of Einstein: they are a proud, ambitious people, who know what they want, and push hard to get it, who know their strengths and dispense with cloaks of modesty. They are highly consequentialist, and are not at all risk averse, preferring 50-50 odds of glory or ruin to 100% chance of modest prosperity. They understand the value of inspiring and organizing large bodies of people to make things happen, but don’t really give a damn about the rationality of their underlings, making up powerful ideologies to get people to do what they want. This doesn't necessarily make them evil, but it puts them at risk. They also tend to be overly dismissive of “purely” epistemic rationality, and narrowly seek out only those truths that they now believe will help them with their goals. They have a problem with thinking they can calculate their way out of ethical injunctions. Thus, they’re vulnerable to fall-out from black swan bets and unknown unknowns, but when that’s not the case, they would be the master instrumental rationalists of the Avatar world.

Water: Flexibility, the right kind of humility, allowing the flow of evidence to push them toward the right conclusions. They tend to have few pre-commitments to particular beliefs, and are good at actually changing their mind but are often too passive, and too emotional. Sometimes they can allow their values to be pushed around by empirical evidence, bleeding together "is" and "ought" and becoming less confident in both. Some of them might have a problem with doubting too much – giving undue weight to improbable possibilities, or having too many reservations about a slam-dunk question. They have problems with perpetual uncertainty, and tend to be too risk-averse to take the large, ambitious gambles that regularly pay off for the Fire Nation. Because of all this uncertainty, they are uncomfortable with large, overarching systems, including ethical systems and decision theories. They tend to just follow empathy and intuition in their actions, and don’t stress about fitting their actions to a theory. They never cause spectacularly awful damage like the Fire Nation, but at the same time never create massive progress. They just flow forward.

Earth: People who have no problem calling a spade a spade, and are hands-down the least vulnerable to the problems of chronic uncertainty, undiscriminating skepticism, and questions like “Does reality even exist?” or “What is truth, truly?”. They don’t always rely on just naïve common sense, but they more than any other nation are aware that traditions, laws, and ethical injunctions evolved for a reason. They will always carefully examine Chesterton’s Fence before tearing it down, but if they do, they will not be half-assed about it. They have methodical, exhaustive processes in place to adopt and change their beliefs, which makes them slow-moving but very confident. They stay “close to the ground,” both epistemically and instrumentally – they don’t spend huge amounts of time discussing the Problem of Induction or Torture vs. Dust Specks. They are a no-nonsense, “see it with my own eyes” kind of people. This makes them very resistant to woo and superstition and belief in belief, but also very resistant to true beliefs that are very far mode or rely on obscure evidence or extensive philosophy.

Air: These are far mode thinkers, philosophers and mathematicians by trade. They have an unusual emotional detachment from the world and from themselves, which allows them more than any other element to recognize and accept the contradictions in their own minds. They have a tendency to always zoom out, to go meta, to ask "Why?". They're the ones in the Avatar world who would spend hours debating their world's version of quantum mechanics, who would try to formalize Occam's Razor, who would have very strong opinions on Torture vs. Dust Specks. They are obsessed with universality and general patterns – and with all the strange, far-out conclusions they seem to imply. However, they tend to forget reality checks and boring, on-the-ground data collection, and are more likely than anyone else to fall prey to the potential nuttiness of abstract reasoning. While their robust philosophical frameworks allow them to accept very strange and speculative probabilities, their general personalities make it all a game to them. They are unlikely to do much about their beliefs, which might be a good thing considering the aforementioned nuttiness. Arguably the most intellectual and least effective of the elements.

Comment author: Faustus2 21 September 2013 05:37:04PM 1 point [-]

Dear God, that idea is beautiful!! My good sir, have you ever thought about creating this masterpiece you speak of? (I'm not pressuring you, I would just like to know) :)
