Hi, I'm the organizer. If you're in São Paulo or nearby, please show up! We'll have an introduction to rationality for newcomers, and talk about Systems 1 and 2, Units of Exchange and Goal Factoring.
You can get more details on the Meetup.com event https://www.meetup.com/pt-BR/Racionalidade-em-Sao-Paulo/events/253667078/ or the Facebook event https://www.facebook.com/events/255536928394025/
I follow several programming newsletters, and I don't have the context to fully understand and appreciate most links they share (although I usually have a general idea of what they're talking about). It's still very valuable to me to find out about new stuff in the field.
I'd patreon a few dollars for something like this.
Check the link below, v0.2. Should be working now!
https://www.dropbox.com/s/59redws46ncdiax/predict_v0.2.apk?dl=0
Thanks!
I'm not sure I get what kind of roulette you mean... something like a ring pie chart?
I thought of using a target, but I'm not sure if that would be much more effective than the sliding bar.
The way I see it, having intuitions and trusting them is not necessarily harmful. But you should recognize them for what they are: snap judgements made by subconscious heuristics that have little to do with the actual arguments you come up with. That way, you can take an intuition as a kind of evidence/argument, instead of as a Bottom Line - like an opinion from a supposed expert who tells you "X is Y" but doesn't have the time to explain. You can then ask: "is this guy really an expert?" and "do other arguments/evidence outweigh the expert's opinion?"
Brain dump of a quick idea:
A sufficiently complex bridge law might say that the agent is actually a rock which, through some bizarre arbitrary encoding, encodes a computation[1]. Meanwhile, the actual agent is somewhere else. Hopefully the agent has an adequate Occamian prior and never assigns this hypothesis any relevance, because of the high complexity of the encoding.
In idea-space, though, there is a computation which is encoded by a rock using a complex arbitrary encoding, which, by virtue of having a weird prior, concludes that it actually is ...
The ULH suggests that most everything that defines the human mind is cognitive software rather than hardware: the adult mind (in terms of algorithmic information) is 99.999% a cultural/memetic construct.
I think a distinction worth drawing here is the difference between "learning" in the neural-net sense and "learning" in the human pedagogical/psychological sense.
The "learning" done by a piece of cortex becoming a visual cortex after receiving neural impulses from the eye isn't something you can override by teaching a person...
If you keep the project open source, I might be able to help with the programming (although I don't know much about Rails, I could help with the client side). The math is a mystery to me too, but can't you charge ahead with a simple geometric mean for combining the estimates while you figure it out?
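To make "simple geometric mean" concrete, here's a minimal sketch of the kind of thing I have in mind (Python; the function name and the renormalization step are my own choices, not anything from your codebase):

```python
import math

def combine_estimates(probabilities):
    """Combine several probability estimates of the same event by taking
    the geometric mean of the "yes" probabilities and of their complements,
    then renormalizing so the result is a probability again."""
    if not probabilities:
        raise ValueError("need at least one estimate")
    n = len(probabilities)
    gm_yes = math.prod(probabilities) ** (1.0 / n)
    gm_no = math.prod(1.0 - p for p in probabilities) ** (1.0 / n)
    return gm_yes / (gm_yes + gm_no)

# Example: three people give 0.6, 0.7 and 0.9 for the same question.
print(combine_estimates([0.6, 0.7, 0.9]))  # ~0.76
```

Renormalizing against the complements just keeps the combination symmetric between "yes" and "no"; a bare geometric mean of the probabilities would also work as a first pass, it only loses that symmetry.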
Hi, and thanks for the awesome job! Will you keep a public record of the changes you make to the book? I'm coordinating a translation effort, and that would be important for keeping it in sync if you change the actual text rather than just fixing spelling and hyperlinking errors.
Edit: Our translation effort is for Portuguese only, and can be found at http://racionalidade.com.br/wiki .
He specifically said he's talking about "homo economicus"-style rational decisions. An agent like that should have no need to punish itself - by having a negative emotion - since the potential loss of utility is itself a compelling reason to take action beforehand. So self-punishment is out. How do you think sadness would serve as a signalling device in this case?
Although I think your point here is plausible, I don't think it fits in a post where you are talking about the logicalness of morality. This qualia problem is physical; whether your feeling changes when the structure of some part of your decision system changes depends on your implementation.
Maybe your background understanding of neurology is enough for you to be somewhat confident stating this feeling/logical-function relation for humans. But mine is not and, although I could separate your metaethical explanations from your physical claims when reading the post, I think it would be better off without the latter.
Great post as usual.
It brings to mind and fits in with some thoughts I have on simulations. Why isn't this two-layered system you described analogous to the relation between a simulated universe and its simulator? I mean: the simulator sees and, therefore, is affected by whatever happens in the simulation. But the simulation, if it is just the computation of a mathematical structure, cannot be affected by the simulator: indeed, if I, simulator, were to change the value of some bits during the simulation, the results I would see wouldn't be the results of t...
Well, you really wouldn't be able to remember qualia, but you'd be able to recall brain states that evoke the same qualia as the original events they recorded. In that sense, "to remember" means your brain enters states that are in some way similar to those of the moments of experience (and, in a world where qualia exist, these remembering-brain-states evoke qualia accordingly). So, although I still agree with other arguments against epiphenomenalism, I don't think this one refutes it.
I think you've taken EY's question too literally. The real question is about the status of statements and facts of formal systems ("systems of rules for symbol manipulation") in general, not arithmetic, specifically. If you define "mathematics" to include all formal systems, then you can say EY's meditation is about mathematics.
...And the sum itself is a huge problem. There is no natural scale on which to compare utility functions. Divide one utility function by a billion, multiply the other by eπ, and they are still perfectly valid utility functions. In a study group at the FHI, we've been looking at various ways of combining utility functions - equivalently, of doing interpersonal utility comparisons (IUC). Turns out it's very hard, there seems no natural way of doing this, and a lot has also been written about this, concluding little. Unless your theory comes with a particular I
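The rescaling point in that quote is the standard vNM fact that a utility function only represents preferences up to a positive affine transformation; in my notation (not theirs):

```latex
U'(x) = aU(x) + b,\quad a > 0
\;\Longrightarrow\;
\mathbb{E}[U'(A)] \ge \mathbb{E}[U'(B)]
\iff
\mathbb{E}[U(A)] \ge \mathbb{E}[U(B)]
```

So dividing one person's utility function by a billion leaves that person's own choices untouched, but it changes almost everything about what the *sum* of two such functions recommends - which is why summing requires some extra, non-arbitrary choice of scale.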
Man, even if you don't think so, you probably do have something to add to the group. Even if you don't have a lot of scientific/philosophical knowledge (I myself felt a little like this talking to the other guys, and I see that as a learning opportunity), you can add just by being a different person, with different experiences and background. Please show up if you can, even if you arrive late!
A more charitable interpretation is that they are trying to assume less, going a little more meta and explaining the general problem, instead of focusing on specifics that they think are important but might not really be.
A failure mode when people don't try to do this is the user who asks a software developer to "just add a button that allows me to autofill this form", when maybe there's an automation that would render the form entirely unnecessary.