He doesn't need to stall for time to transfigure. He could have already been doing it over the last two chapters.
I have one of these. Can confirm, pretty good relative to other similarly priced knives I've tried, and even better than a high-quality knife of the same age when neither had been properly maintained.
In the spirit of this thread, take a typing class. I find that taking classes is an effective way to get over motivation blocks, if that's what is preventing you from learning touch typing.
I'm a math undergrad, and I definitely spend more time in the second sort of style. I find that my intuition is rather reliable, so maybe that's why I'm so successful at math. This might tie into the "two cultures of mathematics", where I am definitely on the theory builder/algebraist side. I study category theory and other abstract nonsense, and I am rather bad (relative to my peers) at Putnam-style problems.
The difference is that saying there is a territory is also a model. The way I would rephrase map/territory into this language is "the model is not the data."
This is the best place to apply effort for my goals, because I think that there might be some problems underlying MIRI's epistemology and philosophy of math that are causing confusion in some of their papers.
That it hasn't been radically triumphant isn't strong evidence of its lack of world-beating potential, though. Pragmatism is weird and confusing; perhaps it just hasn't been exposited or argued for clearly and convincingly enough. Perhaps it has historically been rejected for cultural reasons ("we're doing physicalism so nyah"). I think there is value in clearly presenting it to the LW/MIRI crowd. There are unresolved problems with a naturalistic philosophy that should be pointed out, and it seems that pragmatism solves them.
As for originalit...
The computable algorithm isn't a meta-model though. It's just you in a different substrate. It's not something the agent can run to figure out what to do, because it necessarily takes more computing power. And there is nothing preventing such a pragmatic agent from having a universe-model that is computable, finding a computable algorithm approximating itself, and copying that algorithm over and over.
Intervals and ratios are going to be essentially the same thing for conventional pomodoros. They are some time on, some time off, repeat. It might be weird to have variable pomodoros since the break is for mental fatigue, not reward. Perhaps some mechanism to reward you with an M&M at some time randomly in the second half of your pomodoros?
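If you wanted to mechanize that, here's a minimal sketch (Python; the 25-minute length, the one-second polling, and the print-as-notification are all placeholder choices, not a recommendation):

```python
import random
import time

POMODORO_MINUTES = 25  # conventional work interval; adjust to taste

def pomodoro_with_random_reward():
    """Run one pomodoro, signaling a reward (the M&M) at a uniformly
    random moment in the second half of the interval, so the reward
    timing is variable even though the work/break cycle is fixed."""
    total = POMODORO_MINUTES * 60
    reward_at = random.uniform(total / 2, total)  # second half only
    start = time.time()
    rewarded = False
    while time.time() - start < total:
        if not rewarded and time.time() - start >= reward_at:
            print("Reward: eat the M&M.")  # placeholder notification
            rewarded = True
        time.sleep(1)  # coarse polling is fine at this timescale
    print("Pomodoro done: take your break.")
```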
The most charitable take on it that I can form is similar to Scott's take on MBTI (http://slatestarcodex.com/2014/05/27/on-types-of-typologies/). It might not be validated by science, but it provides a description language with a high amount of granularity over something that most people don't have a good description language for. So with this interpretation, it is more of a theory in the social sciences sense, a lens through which to look at human motivation, behaviour, etc. This probably differs from, and is a much weaker claim than people at Leverage would...
I'd say Nick Bostrom (a respected professor at Oxford) writing Superintelligence (and otherwise working on the project), this (https://twitter.com/elonmusk/status/495759307346952192), some high profile research associates and workshop attendees (Max Tegmark, John Baez, quite a number of Google engineers), give FAI much more legitimacy than connection theory.
If you want a more precise date for whatever reason, it was right at the end of the July 2013 workshop, which was July 19-23. There were a number of leverage folk who had just started the experiment there.
I'm currently interning at MIRI; I had a short technical conversation with Eliezer, a multi-hour conversation with Michael Vassar, and other people seem to be taking me as somewhat of an authority on AI topics.
I agree. I want to comment on some of the downvoted posts, but I don't want to pay the karma.
Irrationality Game:
Politics (in particular, large governments such as the US, China, and Russia) is a major threat to the development of friendly AI. Conditional on FAI progress having stopped, I give a 60% chance that it was because of government interference, rather than existential risk or some other problem.
Bayes is epistemological background, not a toolbox of algorithms.
I disagree: I think you are lumping two things together that don't necessarily belong together. There is Bayesian epistemology, which is philosophy, describing in principle how we should reason, and there is Bayesian statistics, something that certain career statisticians use in their day to day work. I'd say that frequentism does fairly poorly as an epistemology, but it seems like it can be pretty useful in statistics if used "right". It's nice to have nice principles underlying your statistics, but sometimes ad hoc methods and experience and intuition just work.
Depending on the IQ test, I don't think your overall score will go down much if you don't do well on a subsection or two. This is low confidence, and based on only one data point, though. I have subsection scores ranging from 102 to 136, and my total score somehow comes out to 141.
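For the curious, the arithmetic behind that: a composite standardizes the sum of the subtests, and being above average on many partially independent subtests is rarer than being equally above average on one, which inflates the composite. A toy sketch (Python; the subscores, the 0.3 inter-subtest correlation, and the N(100, 15) scaling are all made-up assumptions, not any real test's scoring):

```python
import math

# Toy model: each subtest ~ N(100, 15), pairwise correlation r.
subscores = [110, 115, 118, 120, 122, 125, 126, 128, 130, 136]
r = 0.3                       # assumed inter-subtest correlation
n = len(subscores)

mean_z = sum((s - 100) / 15 for s in subscores) / n
# SD of the mean of n equally correlated standard normals:
sd_mean = math.sqrt((1 + (n - 1) * r) / n)
composite_z = mean_z / sd_mean
print(f"composite IQ ~ {100 + 15 * composite_z:.0f}")  # ~138, above every subscore
```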
That only means you are good at arithmetic. Can you prove, say, that there are no perfect squares of the form
3^p + 19(p-1)
where p is prime?
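(Not a proof, of course, but a quick sanity check over small primes is easy to run — a Python sketch; the bound of 200 is arbitrary:)

```python
import math

def is_prime(n):
    """Trial division; fine for small n."""
    return n >= 2 and all(n % d for d in range(2, math.isqrt(n) + 1))

def is_square(n):
    r = math.isqrt(n)
    return r * r == n

# Look for perfect squares of the form 3^p + 19(p - 1), p prime.
for p in filter(is_prime, range(2, 200)):
    value = 3**p + 19 * (p - 1)
    assert not is_square(value), f"counterexample at p = {p}"
print("no perfect squares for primes p < 200")
```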
The spaceship "exists" (I don't really like using exists in this context because it is confusing) in the sense that in the futures where someone figures out how to break the speed of light, I know I can interact with the spaceship. What is the probability that I can break the speed of light in the future?
Then for Many Worlds, what is the probability that I will be able to interact with one of the Other Worlds?
I would not care more about things if I gain information that I can influence them, unless I also gain information that they can influence me. If I gain credence in Many Worlds, then I only care about Other Worlds to the extent that it might be more likely for them to influence my world.
I disagree with "common sense." In my experience, when questioning people about what they mean by common sense, I find that they usually mean "general principles that seem obviously correct to me." And that doesn't even guarantee that they are correct.
I've got Categories for the Working Mathematician by Mac Lane; I will be going through this because I will be giving some talks on category theory to the math club here at my university. I pretty much don't have any logic background, and I want logic. I have Enderton's A Mathematical Introduction to Logic, which is ok, though I think I want to find a new book. I also have Probability: The Logic of Science that I want to work through. I also want to go through MIRI papers. I am a math undergrad.
I would like to be a part of a study pair or a study group. There seems to b...
+1 because of the first point. Right now we are using this catch-all Reddit style "discussion" forum to encompass absolutely everything and it is a mess.
How about 3^...(3^^^3 up arrows)...^3?
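(For anyone who wants to unpack the notation, Knuth's up-arrow recursion is short to write down — a Python sketch, obviously only runnable for tiny inputs; with 3^^^3 arrows it would never terminate:)

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow: a {n arrows} b.
    One arrow is exponentiation; each extra arrow iterates the
    previous level, so the values explode almost immediately."""
    if n == 1:
        return a ** b
    if b == 1:
        return a
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3 = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
```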
You might want to make the habit a bit shorter than that so that it is easier to practice and repeat a lot.
This is more to address the common thought process "this person disagrees with me, therefore they are an idiot!"
Even if they aren't very smart, it is better to frame them as someone who isn't very smart than to reach for the directly derogatory term "idiot."
"How do you not have arguments with idiots? Don't frame the people you argue with as idiots!"
-- Cat Lavigne at the July 2013 CFAR workshop
If idiots do exist, and you have reason to conclude that someone is an idiot, then you shouldn't deny that conclusion -- at least if you subscribe to epistemic primacy: that forming true beliefs takes precedence over other priorities.
The quote is suspiciously close to being a specific application of "Don't like reality? Pretend it's different!"
Does anyone know of a good textbook on public relations (PR), or a good resource/summary of the state of the field? I think it would be interesting to know about this, especially with regards to school clubs, meetups, and online rationality advocacy.
Okay, that's reasonable. But can we talk about the content of the post itself? I don't think this is really the most important part of the post, nor that the top comment should be about it.
I prefer your style (rather, I really dislike Eliezer's style). Possible data points: I read a lot of math (math blogs, math texts, math papers), I have poor reading comprehension and reading speed, I don't have a particularly short or long attention span, and I don't really read much science or philosophy. I didn't get a whole lot of epiphanies from the sequences, though they did have a strong influence on how I think (i.e. my updates weren't felt as epiphanies).
I like the structure of your writing. I like to build my mental categories from the top down, ...
In transparent-box Newcomb's problem, in order to get the $1M, do you have to (precommit to) one-box even if you see that there is nothing in box A?
Why can't we implement subreddits here? Seems like it would be super useful, for this and for other problems like the fact that philosophy, AGI, life extension/transhumanism and rationality all get mixed into the same discussions section.
Have you looked at rhodiola and L-theanine? They tend to counter some of the negative effects of more intense nootropics.
I am mostly talking about epistemic rationality, not instrumental rationality. With that in mind, I wouldn't consider anyone from a hundred years ago or earlier to be up to my epistemic standards, because they simply did not have access to the requisite information, i.e. cognitive science and Bayesian epistemology. There are people who figured it out in certain domains (like figuring out that the labels in your mind are not the actual things that they represent), but those people are very exceptional and I doubt that I will meet people that are capable of t...
Pretty much someone who has read the LessWrong sequences. Otherwise, someone who is unusually well read in the right places (cognitive science, especially biases; books like Good and Real and Causality), and who demonstrates that they have actually internalized those ideas and their implications.
This might be a more enjoyable test (warning, game and time sink): http://armorgames.com/play/6061/light-bot-20
To be honest, unless they have exceptional mathematical ability or are already rationalists, I will consider them to be mooks. Of course, I won't make that apparent; it is rather hard to make friends that way. Acknowledging that you are smart is a very negative signal, so I try to be humble, which can be awkward in situations like when only two out of 13 people pass a math course you are in, and you get an A- while the other guy gets a C-.
And by the way, rationality, not rationalism.
Tutorials/texts that I know of are Software Foundations, Andrej Bauer's tutorial, and this HoTT-Coq tutorial. It looks like installing the HoTT library is a huge pain in the arse, though, so I think I'll stick with vanilla Coq until either I get one of my CS friends to install it for me, or they make a more user-friendly installer.
Edit: also this
Why Haskell and not Coq or Agda? That's where all the HoTT stuff is being done anyways.
How do you upgrade people into rationalists? In particular, I want to upgrade some younger math-inclined people into rationalists (peers at university). My current strategy is:
- incidentally name-drop my local rationalist meetup group (e.g. "I am going to a rationalist's meetup on Sunday")
- link to LessWrong articles whenever relevant (rarely)
- be awesome and claim that I am awesome because I am a rationalist (which neglects a bunch of other factors for why I am so awesome)
- when asked, motivate rationality by indicating a whole bunch of cognitiv
"I need access to the restricted section, I don't want another one of my friends to die"
I would suspect that an argument along those lines would be much more likely to succeed if Quirrell hadn't given his instructions.
I have read around and I still can't really tell what Westergaardian theory is. I can see how harmony fails as a framework (it doesn't work very well for a lot of music I have tried to analyze), so I think there is a good chance that Westergaard is (more) right. However, I haven't learned much beyond the fact that there are these things called lines, and that there exist rules (I have not actually found a list or description of such rules) for manipulating them, and I am not sure how this is different from counterpoint. I don't want to go and read a textbook to figure this out; I would rather read ~5-10 pages of exposition and big-picture overview.
Just telling everyone to keep Harry away from it improves the security.
In that link, is that the 3 dimensional analog of living on a 2D plane with a hole in it, and when you enter the hole, you flip to the other side of the plane? (Or, take a torus, cut along the circle farthest from the center, and extend the new edges out to infinity?)
And mentioned numerous times.
Nitpick: I would consider the Weierstrass function a different sort of pathology than non-standard models or Banach-Tarski - a practical pathology rather than a conceptual pathology. The Weierstrass function is just a fractal. It never smooths out no matter how much you zoom in.
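For concreteness, a partial-sum sketch of the standard construction, W(x) = sum of a^n cos(b^n pi x) (Python; the choices a = 0.5, b = 13, and the 20-term cutoff are just one set of parameters satisfying the classical conditions 0 < a < 1, b an odd integer, ab > 1 + 3pi/2):

```python
import math

def weierstrass(x, a=0.5, b=13, terms=20):
    """Partial sum of W(x) = sum_n a^n * cos(b^n * pi * x).
    The limit is continuous everywhere but differentiable nowhere:
    zooming in never smooths it out, it just reveals more of the
    same wiggles at a finer scale."""
    return sum(a**n * math.cos(b**n * math.pi * x) for n in range(terms))

# Sample a narrow window; shrinking the window rescales the graph
# but never flattens it -- the "practical pathology" in question.
samples = [weierstrass(i / 1000) for i in range(1000)]
```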
I think any correct use of "need" is either implicitly or explicitly a phrase of the form "I need X (in order to do Y)".
Why does he think of beefing up the restricted section's security only after his conversation with Harry? What did he learn?
I also don't see bringing Harry's parents to Hogwarts as being terribly predictable.
There is no way Harry would get expelled. He is at Hogwarts for his protection - to be close to Dumbledore - not so that he can go to school.
Burning cats is another good example. Can you feel how much fun it is to burn cats? Some people used to have all sorts of fun by burning cats. And it is harder to produce the wrong sort of justification based on bad models for this one than for either burning witches or torturing heretics.
Edit: Well, just scrolled down to where you talk about torturing animals. Beat me to it I guess...