This is amazingly great (I laughed out loud at the "Biceps-controlled socialism" graph), but I feel it only works because the original study authors made the rookie mistake of publishing their data set. The only time I have wanted to try something similar (for the brain mosaic paper), I hoped it would be possible to extract the data from the diagram, but no, the jpg in the pdf is sufficiently low-resolution that it doesn't work.
Ok, so we should identify criminals with "thoughts of committing deadly violence, regardless of action", and then "many of these offenders should probably never be released from confinement". A literal thought crime.
Yes, there will always be some off-by-one errors, so the best we can hope for is to pick the convention that creates fewer of them. That said, the fact that most programming languages choose the zero-based convention seems to suggest that that's the best one.
There's also the revealed word of our prophet Dijkstra: EWD83 - Why numbering should start at zero.
Yeah.
I think the orthodox MIRI position is not that logical proofs are necessary, or even the most efficient way, to make a super-intelligence. It's that humans need formal proofs to be sure that the AI will be well-behaved. A random kludgy program might be much smarter than your carefully proven one, but that's cold comfort if it then proceeds to kill you.
I mean, you can literally build an EmDrive yourself, but you definitely can't measure the tiny thrust yourself. You still need to trust the experts there, no?
Apart from the question about whether it produces any thrust, there is also the question of whether it will lead to any interesting scientific discoveries. For example, if it turns out that there was a bit of contaminating material that evaporated, the thrust is real but the space-faring implications are not...
Eh, elections seem hard to update on though. Before the election, I thought Clinton was 70% likely to win or so, because that's what Nate Silver said. Then Trump won. Was I wrong? Maybe, but it's not statistically significant at even p = 0.05.
So just looking at U.S. presidential elections, you'll never have enough data to see if you're calibrated or not. I guess you can seriously geek out on politics, and follow and make predictions for lots of local and foreign elections also. At that point, it's a serious hobby though, I'm much more of a casual.
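A back-of-the-envelope sketch of why one election tells you so little (assuming, for simplicity, independent predictions all made at a fixed 70% confidence):

```python
# A single 70% prediction that misses has probability 0.3 under the
# model -- nowhere near the p = 0.05 threshold.
p_single_miss = 0.3
assert p_single_miss > 0.05

def all_wrong_pvalue(n, p_right=0.7):
    """Probability of missing n independent 70% predictions in a row."""
    return (1 - p_right) ** n

# How many misses in a row before the record becomes significant?
n = 1
while all_wrong_pvalue(n) > 0.05:
    n += 1
print(n)  # -> 3: even three straight misses is only just suspicious
```

And U.S. presidential elections only come every four years, so accumulating even three comparable data points takes over a decade.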
any suggestions?
It sounds pretty spectacular!
I found one paper about comets crashing into the sun, but unfortunately they don't consider comets as big as yours--the largest one is "Hale-Bopp sized", which they take to be 10^15 kg (which already seems a little low; Wikipedia suggests 10^16 kg).
I guess the biggest uncertainty is how common such big comets are (so, how often we should expect to see one crash into the sun). In particular, I think the known sun-grazing comets are much smaller than the big comet you consider.
Also, I wonder a bit about your 1 se...
See Wikipedia. The point is that T does not just take the input n to the program to be run, it takes an argument x which encodes the entire list of steps the program e would execute on that input. In particular, the length of the list x is the number of steps. That's why T can be primitive recursive.
The claim as stated is false. The standard notion of a UTM takes a representation of a program, and interprets it. That's not primitive recursive, because the interpreter has an unbounded loop in it. The thing that is primitive recursive is a function that takes a program and a number of steps to run it for (this corresponds to the U and T in the normal form theorem), but that's not quite the thing that's usually meant by a universal machine.
I think the fact that you just need one loop is interesting, but it doesn't go as far as you claim; if an angel gives you a program, you still don't know how many steps to run it for, so you still need that one unbounded loop.
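A toy sketch of the distinction (this is not Kleene's actual encoding, just an illustration of where the bounded and unbounded loops live; the mini-language and instruction names are made up):

```python
# A trivial program model: a list of instructions over one register.
# ('dec',) decrements the register; ('jnz', k) jumps to k if nonzero.

def run_bounded(prog, reg, steps):
    """Run `prog` for at most `steps` steps.
    Returns ('halt', reg) or ('running', reg). The loop has a known
    bound, so this is primitive-recursive in spirit."""
    pc = 0
    for _ in range(steps):
        if pc >= len(prog):
            return ('halt', reg)
        op = prog[pc]
        if op[0] == 'dec':
            reg -= 1
            pc += 1
        elif op[0] == 'jnz':
            pc = op[1] if reg != 0 else pc + 1
    return ('halt', reg) if pc >= len(prog) else ('running', reg)

def run_to_completion(prog, reg):
    """The unbounded part: search for a sufficient step count."""
    steps = 1
    while True:                 # the single unbounded loop
        status, out = run_bounded(prog, reg, steps)
        if status == 'halt':
            return steps, out
        steps *= 2

countdown = [('dec',), ('jnz', 0)]  # decrements until the register is 0
```

All the interpretive work sits in `run_bounded`; `run_to_completion` is the one unbounded search you can't get rid of.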
I'm not sure what you have in mind for treatment of risk in finance. People will be concerned about risk in the sense that they compute a probability distribution of the possible future outcomes of their portfolio, and try to optimize it to limit possible losses. Some institutional actors, like banks, have to compute a "value at risk" measure (the loss of value in the portfolio in the bottom 5th percentile), and have to put up collateral based on that.
But those are all things that happen before a utility computation, they are all consistent wi...
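The value-at-risk computation mentioned above can be sketched like this (the return distribution and portfolio numbers here are entirely made up for illustration):

```python
import random

# Simulate one-year portfolio outcomes under an assumed return
# distribution, then read off the 5th-percentile loss (95% VaR).
random.seed(0)
current_value = 1_000_000
# Assumption: normal returns, 5% mean, 15% standard deviation.
outcomes = [current_value * (1 + random.gauss(0.05, 0.15))
            for _ in range(100_000)]

losses = sorted(current_value - v for v in outcomes)
var_95 = losses[int(0.95 * len(losses))]  # loss exceeded with 5% prob.
```

The bank would then hold collateral scaled to `var_95` -- but note this is all computed on the outcome distribution before any utility enters the picture.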
It is very standard in economics, game theory, etc, to model risk aversion as a concave utility function. If you want some motivation for why, then e.g. the Von Neumann–Morgenstern utility theorem shows that a suitably idealized agent will maximize utility. But in general, the proof is in the pudding: the theory works in many practical cases.
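A minimal numeric sketch of how concavity produces risk aversion, using u(x) = sqrt(x) as the (arbitrarily chosen) concave utility function:

```python
from math import sqrt

u = sqrt  # a concave utility function

# A 50/50 gamble between $0 and $100, versus its expected value, $50.
expected_utility_of_gamble = 0.5 * u(0) + 0.5 * u(100)   # = 5.0
utility_of_sure_thing = u(50)                            # ~ 7.07

# The agent strictly prefers the sure $50: risk aversion.
assert utility_of_sure_thing > expected_utility_of_gamble

# Certainty equivalent: the sure amount valued exactly as much as the
# gamble. Solving u(x) = 5 gives x = 25, well below the $50 mean.
certainty_equivalent = expected_utility_of_gamble ** 2
```

The gap between the $50 expected value and the $25 certainty equivalent is the risk premium the agent would pay to avoid the gamble.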
Of course, if you want to study exactly how humans make decisions, then at some point this will break down. E.g. the decision process predicted by Prospect Theory is different from maximizing utility. So in general, th...
She eventually gives him the carrot pen so he can delete the recording, no?
I took the survey!
I write down one line (about 80 characters) about what things I did each day. Originally I intended to write down "accomplishments" in order to incentivise myself into being more accomplished, but it has since morphed into also being a record of notable things that happened, and a lot of free-form whining over how bad certain days are. It's kind of nice to be able to go back and figure out when exactly something in the past happened, or generally reminisce about what was going on some years ago.
He is a historian, studying the history of science. That subject is exactly about studying what people (scientists) are saying.
I think Shane Legg's universal intelligence itself involves Kolmogorov complexity, so it's not computable and will not work here. (Also, it involves a function V, encoding our values; if human values are irreducibly complex, that should add a bunch of bits.)
In general, I think this approach seems too good to be true? An intelligent agent is one which performs well in the environment. But don't the "no free lunch" theorems show that you need to know what the environment is like in order to do that? Intuitively, that's what should cause the Kolmogorov complexity to go up.
For a LessWronger, the territory is the thing that can disagree with our map when we do an experiment. But for someone living in a "social culture", the disagreement with maps typically comes from enemies and assholes! Friends don't make their friends update their maps; they always keep an extra map for each friend.
I figured this was an absurd caricature, but then this thing floated by on tumblr:
...So when arguing against objectivity, they said, don’t make the post-modern mistake of saying there is no truth, but rather that there are infinite
Realistic kissing simulator to get over the fear of kissing
Ok, this is pretty amazing.
I guess because people want to live in the existing cities? It's not like there is nowhere to live in California--looking at some online apartment listings you can rent a 2 bedroom apt in Bakersfield CA for $700/month. But people still prefer to move to San Francisco and pay $5000/month.
In animal training it is said that the best way to get rid of an undesired behaviour is to train the animal in an incompatible behaviour. For example, if you have a problem with your dog chasing cats, train it to sit whenever it sees a cat -- it can't sit and chase at the same time. Googling "incompatible behavior" or "Differential Reinforcement of an Incompatible Behavior" yields lots of discussion.
The book Don't Shoot the Dog talks a lot about this, and suggests that the same should be true for people. (This is a very Less Wrong-style bo...
Nitpick: it would be better to write "also a theorem of epistemic logic", since there are other modal logics where it is not provable. (E.g. just modal logic K).
I guess your theory is the same as what Alice Maz writes in the linked post. But I'm not at all convinced that that's a correct analysis of what Piper Harron is writing about. In the comments to Harron's post there are some more concrete examples of what she is talking about, which do indeed sound a bit like one-upping. I only know a couple of mathematicians, but from what I hear there are indeed plenty of social games even in math---it's not a pure preserve where only facts matter.
(And in general, I feel Maz' post seems a bit too saccharine, in so far a...
What are previous examples of people on LW applying mental techniques and getting into seriously harmful states?
Source: been making my own jam for years, had plenty of time to experiment.
So did you actually make jam without sugar and then store it for years before eating it?
In the story the superhappies propose to self-modify to appreciate complex art, not just simple porn, and they say that humans and babyeaters will both think that is an improvement. So to some degree the superhappies (with their very ugly spaceships) are repulsive to humans, although not as strongly repulsive as the babyeaters.
they are moral and wouldn't offer a deal unless it was beneficial according to both utility functions being merged (not just according to their value of happiness).
I guess whether it is beneficial or not depends on what you compare to? They say,
The obvious starting point upon which to build further negotiations, is to combine and compromise the utility functions of the three species until we mutually satisfice, providing compensation for all changes demanded.
So they are aiming for satisficing rather than maximizing utility: according to all three bef...
Sure, I think that was annoying. But it's not the stated reason for the ban.
Also, "monogamy versus hypergamy" has been discussed on Less Wrong since the dawn of time. See e.g. this post and discussion in comments, from 2009. Deciding now that this topic is impermissible crimethink seems like a pretty drastic narrowing of allowed thoughts.
In my opinion, the problem wasn't the topic per se, but how the author approached it:
comments in every Open Thread on the same topic, zero visible learning.
I... what? As I understand the comment, he wanted to ban sex outside marriage. Describing that as "women should be distributed to men they don't want sex with" seems ridiculously exaggerated.
I agree that his one-issue thing was tiresome, and perhaps there is some argument for making "being boring and often off-topic" a bannable offense in itself. But this moderation action seems poorly thought through.
Edit: digging through his comment history finds this comment, where he writes it would be better to marry daughters off as young virgins....
The ending is a bit rushed. Here's hoping the sequel is good; it just arrived in the mail.
I thought the sequel was more boring. The structure of the books doesn't really work very well as a series, I feel. The things that I found most appealing about Justice were the new kind of narrator (in the flashbacks, when the same events are described from multiple viewpoints of the same character), and the gradual puzzle of figuring out how the universe works. But at the end of Justice that's all over, there is just a single ancillary left, and the whodunnit-mystery has been explained. So then Sword is a lot less novel, just another space opera...
I feel this only raises more questions. :)
The description of the use of posture in aikido is super interesting!
I'm a little worried that analogizing "mental arts" to martial arts might lead the imagination in the wrong direction--it evokes ideas like "flexible" or "balanced" etc. But thinking about mental states when I get a lot of research done, the biggest one by far is when I'm trying to prove some annoying guy wrong on an inconsequential comment thread on tumblr. If I could only harness that motivation, I'd be set for life. Thinking about aikido practitioners primes me for things like "zen-like and serene", not "peeved and petty".
Upvoted, but mostly for the first paragraph and photo. :)
Just calling the problem undecidable doesn't actually solve anything. If you can prove it's undecidable, it creates the same paradox. If no Turing machine can know whether or not a program halts, and we are also Turing machines, then we can't know either.
I guess the answer to this point is that when constructing the proof that H(FL, FL) loops forever, we assume that H can't be wrong. So we are working in an extended set of axioms: the program enumerates proofs given some set of axioms T, and the English-language proof in the tumblr post uses the axiom s...
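The diagonal construction being discussed can be sketched like this (the `halts` oracle is hypothetical -- no total implementation exists -- so the toy `claimed_halts` below is just a stand-in to make the code runnable):

```python
# Build the diagonal program FL from any claimed halting oracle.
def make_FL(halts):
    def FL(p):
        if halts(p, p):     # if the oracle says p(p) halts...
            while True:     # ...loop forever
                pass
        return 0            # otherwise halt immediately
    return FL

# Whatever `halts` claims about FL(FL), it is wrong:
#   halts(FL, FL) == True  -> FL(FL) loops forever
#   halts(FL, FL) == False -> FL(FL) halts
def claimed_halts(p, x):
    return False            # a toy "oracle" that always answers "loops"

FL = make_FL(claimed_halts)
assert FL(FL) == 0          # the oracle said "loops", yet FL(FL) halted
```

So any fixed candidate for H is refuted by the FL built from it; the extended-axioms point above is about which proof system that refutation is carried out in.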
The voicing thing is known as rendaku. Generally it's a bit of a mystery when it will and will not happen. This thesis lists a bunch of proposed rules, two of which seem relevant:
Rendaku is favoured if the compound words are native-Japanese (yamatokotoba). This might be the reason for kozukai vs mahoutsukai: ko is native-Japanese while mahou is Sino-Japanese. So by analogy, one would not expect voicing for beizutsukai.
Noun+Verb compounds exhibit rendaku if the noun is an "adverbial modifier" but not if it's a direct object. In "using magi
I would imagine that using foxes gives you a lot more to work with though. Foxes in nature live in pairs or small groups. The children stay around the parents for a long time. So they already have mechanisms in place for social behaviours. (And even if they are not expressed, there probably are some latent possibilities shared among mammals? E.g. this article about the evolution of housecats notes that they independently evolved a lot of the same behaviours that lion prides use to socialise, even though wildcats are solitary.)
How about amortizing it among LessWrong users? If there are enough interested people we can pool up to buy a pair, each one in the pool gets to keep it for (say) a month, and then mails it in an envelope to the next guy. Maybe everyone has to write an experience report as a Less Wrong comment, too.
Indians call sterile mosquitos CIA agents (Washington Post, December 10, 1974).
"The history of genetic control trials against culicine mosquitoes in India in the mid-1970s shows how opposition can have far-reaching consequences. After several years of work on field testing of the mating competitiveness of sterile male mosquitoes, accusations that the project was meant to obtain data for biologic warfare using yellow fever were launched in the press and taken up by opposition politicians. Shortly afterward, a well-prepared attempt to eradicate an urban...
My impression is that chemical weapons were very effective in the Iran-Iraq war (e.g.), despite the gas mask having been invented.
Coming up: the post is promoted to Main; it is re-released as a MIRI whitepaper; Nick Bostrom publishes a book-length analysis; The New Yorker features a meandering article illustrated by a tasteful watercolor showing a trolley attacked by a Terminator.
As I understood it, the reaction mass for Orion comes from the chemical explosives used to implode the bomb. (The bomb design would be quite unusual, with several tons of explosives acting on a very small amount of plutonium).
I can see what you're getting at, but I don't think the rationality content here is enough to justify importing a notoriously divisive topic.
Yeah, the Barbie book seems kind of unfortunate. On the other hand, lambdaphagy wrote a similarly hilarious/depressing post about the criticism of the book: women writing about their experiences in IT is very problematic.
So in the case of this particular paper, some other researchers did ask for the raw data, and they got it and carried out exactly the analysis I was interested in knowing about. So I guess it's a happy ending, except I didn't get to write a tumblr post back when there was a lot of buzz in the media about it. :)