Shirobako: The most realistic portrayal of ordinary working life (at least, of a life like mine) I've seen in any fictional medium. Warm-hearted (perhaps to a fault), straightforward, very much a case of writing what they know and love. I recommend it to anyone interested in animation, but especially to students and others interested in seeing what a day in working life looks like.
Hibike! Euphonium: Teenage drama (not actually a romance show, but it felt like one) that again felt very true, with KyoAni's usual high production values.
A Farewell to Arms (one of the Short Peace shorts)....
Discourage/ban Open threads. They are an unusual thing to have on an open forum. They might have made sense when posting volume was higher, but right now they further obscure valuable content.
I'd say the opposite: the open threads are the part that's working. So I'd rather remove main/discussion and make everything into open threads, i.e. move to something more like a traditional forum model. I don't know whether that's functionally the same thing.
In other words, gaining $1M has to be no more than about 25% better than gaining $1k.
Interesting. My thought process was that it's worth giving up $8000 in EV to avoid a 1% chance of ending up with only $1000. I think my original statement was true, but perhaps poorly calibrated; these days I shouldn't be that risk-averse.
Your returns must be very rapidly diminishing. If u is your kilobucks-to-utilons function then you need [7920u(1001)+80u(1)]/8000 > [3996u(1000)+4u(0)]/4000, or more simply 990u(1001)+10u(1) > 999u(1000)+u(0). If, e.g., u(x) = log(1+x) (a plausible rate of decrease, assuming your initial net worth is close to zero) then what you need is 6847.6 > 6901.8, which doesn't hold. Even if u(x) = log(1+log(1+x)) the condition doesn't hold.
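A quick numeric sanity check of the above (a minimal Python sketch; the 990/10 vs 999/1 weights and both trial utility functions are taken straight from the comment, while the `check` helper is just illustrative scaffolding):

```python
import math

def check(u, name):
    # Reduced form of the comparison above: 990u(1001) + 10u(1) vs 999u(1000) + u(0),
    # with amounts in kilobucks, so u(1001) is the utility of ending at $1,001,000.
    lhs = 990 * u(1001) + 10 * u(1)
    rhs = 999 * u(1000) + 1 * u(0)
    print(f"{name}: {lhs:.1f} {'>' if lhs > rhs else '<='} {rhs:.1f}")

check(lambda x: math.log(1 + x), "u(x) = log(1+x)")                        # 6847.6 <= 6901.8
check(lambda x: math.log(1 + math.log(1 + x)), "u(x) = log(1+log(1+x))")  # also fails to hold

# For comparison, the raw expected values in kilobucks:
# 0.99 * 1001 + 0.01 * 1 = 991 vs 0.999 * 1000 = 999
# - presumably the $8000 of EV referred to in the grandparent comment.
```

Both candidate utility functions reproduce the numbers quoted above, so the condition fails even under quite rapidly diminishing returns.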
If we fix our origin by saying that u(0)=0 (i.e., we're looking at utility change as a result of the transaction) and s...
AIG was the borrower (and separately Fannie and Freddie), banks were the lenders; it is absolutely useful to think about the situation in those terms. It highlights the conflict between our political intuitions that insurance should be protected and that financial speculation should not: some people thought AIG was doing the former, some the latter. Likewise, some people thought Freddie and Fannie were widows-and-orphans investments that the government should guarantee, and some thought they were private financial traders. Clarifying these things could have averted the crisis; it's absolutely a useful model.
Light, by M John Harrison (based on the first 22%). I'm finding it genuinely hard to read - a bit like The Quantum Thief or The January Dancer, but more so than either of them. I can't yet say it's good per se - in particular, the three narrative strands show very little sign of converging at this stage - but it's a striking, provocative experience.
No. There are any number of predictable systems in our quantum universe, and no reason to believe that an agent need be anything other than e.g. a computer program. In any case, "noise" is the wrong way to think about QM; quantum behaviour is precisely predictable, and it's just the subjective Born probabilities that apply.
I think there's an analogy here with "purchase fuzzies and utilons separately" that Levine misses. If you want to be trendy and also have a bunch of investment returns in the future, it's probably more efficient to buy those two things from separate sources than to try to get both from a single product.
You seem a very enthusiastic participant here, despite a lot of downmodding. I admire that - on here. In real life my fear would be that it would translate into clinginess - wanting to come to all my parties, wanting to talk forever, and the like. (And perhaps that it reflects being socially unpopular, and that there might be a reason for that.) So I'd lean slightly towards avoiding.
The whole point of acausal trading is that it doesn't require any causal link. I don't think there's any rule that says it's inherently hard to model people a long way away.
Imagine an AI running on some high-quality silicon hardware that splits itself into two halves, one of which falls into a rotating black hole (but has engines that let it avoid the singularity, at least for a while). The two halves are now causally disconnected (well, the one outside can send messages to the one inside, but not vice versa) but still have very accurate models of each other.
Not quite - rather the everyday usage of "real" refers to the model with the currently-best predictive ability. http://lesswrong.com/lw/on/reductionism/ - we would all say "the aeroplane wings are real".
Does an amoeba want anything? Does a fly? A dog? A human?
You're right, of course, that we have better ways to model a calculator than as an agent. But that's only because we understand calculators, and they have a very limited range of behaviour. As a program gets more complex and creative, it becomes more predictive to think of it as wanting things (or rather, the alternative models become less predictive).
A program designed to answer a question necessarily wants to answer that question. A superintelligent program trying to answer that particular question runs the risk of acting as a paperclip maximizer.
Suppose you build a superintelligent program designed to make precise predictions, by being more creative and better at prediction than any human could be. Why are you confident that one of the creative things this program does to make itself better at predictions isn't turning the matter of the Earth into computronium as step 1?
It's about where I expected. I think 6 is probably the best you can do under ideal circumstances. Legitimate, focussed work is exhausting.
If you're looking for bias: this is a community where less productive people probably prefer to think of themselves as intelligent but akratic. Also, you've asked at the end of a long holiday, at least as far as any students here are concerned.
I'd rather people actually said "Do you want to come back to my room for sex?" than "Do you want to come back to my room for coffee?" with coffee as a euphemism for sex. Some people will take coffee at face value, which can lead either to uncomfortable situations (including fear of assault) or to missed opportunities, because they're bad at reading between the lines.
I'd rather that too, and I've had it go wrong in both directions. But the whole point of much of this site is that outcomes are more importan...
Thank you for publishing. Before this I think the best public argument from the AI side was Khoth's, which was... not very convincing, although it apparently won once.
I still don't believe the result. But I'll accept (unlike with nonpublic iterations) that it seems to be a real one, and that I am confused.
I didn't like Ancillary Justice so much, FWIW - I didn't find the culture so compelling, and the lead's morality was jarring to me (she seemed less like someone seeing the flaws in the culture she was raised in, and more like someone who had always instinctively held a western liberal morality that she'd been suppressing to fit in).
Do you have a view on The January Dancer? I loved that - modern space opera, with some interesting cultures, but also a compelling plot on the sci-fi side.
Another tranche of shows watched with my group, though they don't really end up as recommendations:
Blood Blockade Battlefront: Started with some fun action and a very cool-looking setting, but decayed rapidly - the plot arc it tried to set up towards the end was just dull. Avoid.
Knights of Sidonia (season 2): Shifts much more towards harem antics than serious sci-fi; also some massive power inflation which could easily have been thematic but... isn't. I greatly enjoyed it, but would only recommend it to people who enjoy light comedy/romance.
Fate/Stay Nigh...
Looks like their website has been taken over by spam. Which in turn gives me very little confidence in an organization that's supposed to be around until my death and for many years afterwards.
Do you know anything about the current state of play in the UK? Are you still covered?