All of Gust's Comments + Replies

A more charitable interpretation is that they are trying to assume less, going a little more meta and explaining the general problem, instead of focusing on specifics they think are important but might not really be.

A failure mode when people don't try to do this is the user who asks a software developer to "just add a button that allows me to autofill this form", when maybe there's an automation that renders the form totally unnecessary.

Hi, I'm the organizer. If you're in São Paulo or nearby, please show up! We'll have an introduction to rationality for newcomers, and talk about Systems 1 and 2, Units of Exchange and Goal Factoring.

You can get more details on the Meetup.com event https://www.meetup.com/pt-BR/Racionalidade-em-Sao-Paulo/events/253667078/ or the Facebook event https://www.facebook.com/events/255536928394025/

There's this: http://miegakure.com/. It's been in development for several years now.

What do you mean, in this context?

0IlyaShpitser
"Turning knobs" in a model is how people think about cause and effect formally.

I follow several programming newsletters, and I don't have the context to fully understand and appreciate most links they share (although I usually have a general idea of what they are talking about). It's still very valuable to me to find out about new stuff in the field.

I'd patreon a few dollars for something like this.

0[anonymous]
Thanks for bringing this up again. I think I'll try a weekly roundup next week and see how it goes (along with short summaries).
-2Gleb_Tsipursky
The first set of rationality-themed merchandise is ready! Thanks for your suggestions :-)

Thanks! I'll look into adding tags and timeframes. I'm not sure how to do that without the layout getting too crowded.

0Bobertron
Great! Works so far.

You mean, instead of programming an AI on a real-life computer and showing it a "Game of Life" table to optimize, you could build a Turing machine inside a Game of Life table, program the AI inside this machine, and let it optimize the very table it's in? Makes sense.
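
For concreteness, here's a minimal Game of Life step function (plain Python; the glider is just the standard example pattern): the point is that the table is an ordinary computational substrate, so a Turing machine, and hence the AI, can in principle be laid out as a pattern inside it:

```python
from collections import Counter

def life_step(live_cells):
    """One Game of Life update; live_cells is a set of (x, y) pairs."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for x, y in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step with exactly 3 neighbors,
    # or with 2 neighbors if it is already alive.
    return {c for c, n in neighbor_counts.items()
            if n == 3 or (n == 2 and c in live_cells)}

# A glider: the classic self-propagating pattern, the simplest hint
# that the board can host structured computation.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
print(life_step(glider))
```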

This is weird. I'll test to see if I can reproduce and report back (hopefully with a fix).

Thanks!

  • I'm not getting the flickering here... are you on a low-end device? Which version of Android are you on?
  • No difference at all. I just thought it would make sense to phrase the predictions in the form of questions and answers - so you could e.g. pick a question from a pre-made list and just choose your answer.
  • Good to know, I thought "long press to edit" was a common enough pattern that everybody would discover it.
0dutchie
It's a Moto X (2nd gen) with Android 5.1, so not particularly low-end. It is most obvious if I slide only just fast enough for the inertia to take it between tabs, then it appears to get confused as to which one is highlighted at the crossover point.

I'm not sure I get what kind of roulette you mean... something like a ring pie chart?

I thought of using a target, but I'm not sure if that would be much more effective than the sliding bar.

0[anonymous]
Yes, the ring pie chart.

The way I see it, having intuitions and trusting them is not necessarily harmful. But you should actually recognize them for what they are: snap judgements made by subconscious heuristics that have little to do with the actual arguments you come up with. That way, you can take an intuition as a kind of evidence/argument, instead of a Bottom Line - like an opinion from a supposed expert who tells you that "X is Y" but doesn't have the time to explain. You can then ask: "is this guy really an expert?" and "do other arguments/evidence outweigh the expert's opinion?"

6tailcalled
Note that both for experts and for your intuition, you should consider that you might end up double-counting the evidence if you treat them as independent of the evidence you have found - if everybody is doing everything correctly (which very rarely happens), you, your intuition and the experts should all know the same arguments, and naive thinking might double/triple-count the arguments.
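
A toy numeric version of the double-counting worry, in odds form (the numbers are made up): if you, your intuition, and an expert all derived your views from the same single argument, multiplying the likelihood ratios as if they were independent overstates the update:

```python
prior_odds = 1.0     # 1:1 odds on the hypothesis
argument_lr = 4.0    # the one real argument, worth a 4:1 likelihood ratio

# Correct: the argument counts once, no matter how many heads it passed through.
print(prior_odds * argument_lr)       # 4.0 -> 4:1 odds

# Naive: treating my reasoning, my intuition, and the expert's verdict as
# three independent sources triple-counts the same argument.
print(prior_odds * argument_lr ** 3)  # 64.0 -> 64:1 odds, far too confident
```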

I'm sad the original FB posts were deleted. Now I can never show my kids the occasion where Eliezer endorsed a comment of mine =(

Brain dump of a quick idea:

A sufficiently complex bridge law might say that the agent is actually a rock which, through some bizarre arbitrary encoding, encodes a computation[1]. Meanwhile, the actual agent is somewhere else. Hopefully the agent has an adequate Occamian prior and never assigns this hypothesis any relevance, because of the high complexity of the encoding.

In idea-space, though, there is a computation which is encoded by a rock using a complex arbitrary encoding, which, by virtue of having a weird prior, concludes that it actually is…
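
A sketch of what the "adequate Occamian prior" is doing (Solomonoff-style: weight each hypothesis by 2^-(description length); the bit counts here are invented for illustration):

```python
def prior_weight(description_length_bits):
    # Occamian prior: each extra bit of description complexity
    # halves the hypothesis's prior weight.
    return 2.0 ** -description_length_bits

# "I'm an ordinary embedded agent" (say, 100 bits of bridge law) vs.
# "I'm a rock that encodes this computation under a bizarre 1000-bit decoding".
print(prior_weight(100))   # ~7.9e-31
print(prior_weight(1000))  # ~9.3e-302: the rock hypothesis never gets any relevance
```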

"The ULH suggests that most everything that defines the human mind is cognitive software rather than hardware: the adult mind (in terms of algorithmic information) is 99.999% a cultural/memetic construct."

I think a distinction worth drawing here is the difference between "learning" in the neural-net sense and "learning" in the human pedagogical/psychological sense.

The "learning" done by a piece of cortex becoming a visual cortex after receiving neural impulses from the eye isn't something you can override by teaching a person... (read more)

1jacob_cannell
This is a good point, Gust, and I agree that there is a distinction at the high level in terms of the types of concepts that are learned, the complexity of the concepts, and the structures involved, even though the high-level learning algorithms and systems are much the same.

All learning involves brain rewiring; that's just how the brain works at the low level. And you can actually override the neural impulses from the eye and cause the brain to learn new things. Learning to read is one simple example; another, more complex example is the reversed-vision goggle experiments that MIT did so long ago: humans can learn to see upside down after, I believe, a week or so of visual experience with the goggles on.

I agree that learning complex linguistic concepts requires learning over more moving parts in the brain: the cortical regions that specialize in language, along with the BG, working memory in the PFC, various other cortical regions that actually model the concepts and mental algorithms represented by the linguistic symbols, memory recall operations in the hippocampus, etc. So yes, learning cultural/memetic concepts is more complex and perhaps qualitatively different. Yeah, I probably should have said 99.999% environmental construct.

Well, you'd have to hardcode at least a learning algorithm for values if you expect to have any real chance that the AI behaves like a useful agent, and that falls within the category of important functionalities. But then I guess you'll agree with that.

0hairyfigment
Don't feed the troll. "Not hardcoding values or ethics" is the idea behind CEV, which seems frequently "explored round here." Though I admit I do see some bizarre misunderstandings.

You have to hardcode something, don't you?

-2TheAncientGeek
I meant not hardcoding values or ethics.

You're a Brazilian studying Law who's been around LW since 2013 and I'd never heard of you? Wow. Please show up!

If you keep the project open source, I might be able to help with the programming (although I don't know much about Rails, I could help with the client side). The math is a mystery to me too, but can't you charge ahead with a simple geometric mean for combining the estimates while you figure it out?
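
In case it helps, here's the kind of stopgap I mean (plain Python rather than Rails, and the function name is mine): a geometric mean of the users' estimates:

```python
import math

def combine_estimates(estimates):
    """Geometric mean of positive estimates (a crude but serviceable pool)."""
    return math.exp(sum(math.log(x) for x in estimates) / len(estimates))

# Three users estimate the same quantity:
print(combine_estimates([120, 200, 90]))  # ~129.3
```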

We're translating into Brazilian Portuguese only, since that's our native language.

Hi, and thanks for the awesome job! Will you keep a public record of changes you make to the book? I'm coordinating a translation effort, and that would be important to keep it in sync if you change the actual text, not just fix spelling and hyperlinking errors.

Edit: Our translation effort is for Portuguese only, and can be found at http://racionalidade.com.br/wiki .

1hydkyll
How is that translation coming along? I could help with German.
3Rob Bensinger
Yes, we'll keep a public record of content changes, or at least a private record that we'd be happy to share with people doing things like translation projects.

Interesting idea. Brazilian law explicitly admits lottery as a form of settling, but I'm not sure if that example with a penalty for not winning a lawsuit would be admissible.

I guess I misunderstood what you meant by "There are many ways to tackle this question, but I mean this in a homo economicus, not biased perspective." then. See my reply to ShardPhoenix.

0diegocaleiro
Oh, yes, you did (but this is always the writer's responsibility, so it is my fault (Gilbert 2012)). I am writing a text about what Should happen, not what does happen. Is-ought problem. I meant what a rational actor should do, without changing the Is aspect of reality. So the homo economicus was the Should agent. The Is agent is still like us.

He specifically said he's talking about a "homo economicus", "rational" kind of decision. An agent like that should have no need to punish itself - by having a negative emotion - since the potential loss of utility is itself a compelling reason to take action beforehand. So self-punishment is out. How do you think sadness would serve as a signalling device in this case?

0ShardPhoenix
This is speculative, but if someone isn't upset about losing an opportunity, one could infer that they never really believed that they had it in the first place - whereas if they're upset, perhaps losing the opportunity was just bad luck.

I'm not sure what you mean by "you SHOULD be sad when you miss an opportunity". What's the advantage of being sad instead of just shrugging and replanning?

1diegocaleiro
I was assuming a non-transhuman world in which the unnecessary connection between sadness and emotional thoughtfulness, as well as between sadness and System 2 replanning, is a reality. Sorry I didn't point it out explicitly.
0ShardPhoenix
I can think of some purposes this sadness might serve - e.g. signalling or self-punishment (for lack of past efforts), with TDT-type considerations for why you wouldn't just skip it.

I've read Kolak's Cognitive Science, which you recommended in that textbook list post. I enjoyed it a lot, and it didn't feel like I needed some previous introductory reading. Any reason why you left it out now?

Awesome project. I really liked the Facebook discussion, and this post explains clearly and concretely a part that some people found confusing. Very well written. Congratulations, Robb.

This just feels really promising, although I can't say I've really followed it all (you lost me on the math a couple of posts ago, but that's my fault). I'm eagerly waiting for the re-post.

0[anonymous]
Sorry for the long delay. I'm actually polishing up the next version right at this very moment. Expect something soon.

All the content in the post just fell into place after I read Giles' summary. Still a great post, though.

"Necessary entities, Moses ben Maimonides"
"Anselm's ontological, Summa Theologica"

I think these are switched.

1ygert
Also, the lines are all mixed up.

Although I think your point here is plausible, I don't think it fits in a post where you are talking about the logicalness of morality. This qualia problem is physical; whether your feeling changes when the structure of some part of your decision system changes depends on your implementation.

Maybe your background understanding of neurology is enough for you to be somewhat confident stating this feeling/logical-function relation for humans. But mine is not and, although I could separate your metaethical explanations from your physical claims when reading the post, I think it would be better off without the latter.

I guess you could still build a causal graph if the universe is defined by initial and end states - you'd just have two disconnected nodes at the top. But you'd have to give up the link between causality and what we call "time".

3DanielLC
"But you'd have to give up the link between causality and what we call 'time'." You'd just have to make it slightly weaker. Entropy will still by and large increase in the direction we call "forward in time". So long as entropy is increasing, causality works. I don't think the errors would be enough to notice in any feasible experiment.

Great post as usual.

It brings to mind and fits in with some thoughts I have on simulations. Why isn't this two-layered system you described analogous to the relation between a simulated universe and its simulator? I mean: the simulator sees and, therefore, is affected by whatever happens in the simulation. But the simulation, if it is just the computation of a mathematical structure, cannot be affected by the simulator: indeed, if I, the simulator, were to change the value of some bits during the simulation, the results I would see wouldn't be the results of t…

Well, you really wouldn't be able to remember qualia, but you'd be able to recall brain states that evoke the same qualia as the original events they recorded. In that sense, "to remember" means your brain enters states that are in some way similar to those of the moments of experience (and, in a world where qualia exist, these remembering-brain-states evoke qualia accordingly). So, although I still agree with other arguments against epiphenomenalism, I don't think this one refutes it.

I don't know if this insight is originally yours or not, but thank you for it. It's like you just gave me a piece of the puzzle I was missing (even if I still don't know where it fits).

I think you've taken EY's question too literally. The real question is about the status of statements and facts of formal systems ("systems of rules for symbol manipulation") in general, not arithmetic, specifically. If you define "mathematics" to include all formal systems, then you can say EY's meditation is about mathematics.

Actually, if you think of it as affecting us but not being affected by us, then, in EY's words, mathematics is "higher". We would be "shadows" influenced by the higher tier, but unable to affect it.

But I don't really think this line of reasoning leads anywhere.

It's 25% of the Doctors, not of the population of potential victims. If the Doctors in each group take victims at the same frequency and in the same quantity, the number of victims will be the same. Actually, depending on what kind of social impact you're thinking about, maybe the largest group suffers the least.

4A1987dM
I was thinking about the fact that, if doctors are known to target members of $group, all members of $group might feel worried. (I'm using “worry” to mean ‘psychological discomfort’.) Of course ceteris paribus the probability that a given member of $group will be targeted will be inversely proportional to the size of $group; but since humans are biased, I guess the amount of worry an individual will experience won't be directly proportional to the probability of being targeted, so the total amount of worry will increase with the size of $group.
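
Rough numbers for that intuition (the sub-linear exponent is a made-up modeling assumption): each member's probability of being targeted falls as 1/N, but if felt worry scales sub-linearly with probability, total worry still grows with N:

```python
def total_worry(group_size, victims=10, bias_exponent=0.5):
    p = victims / group_size      # individual probability of being targeted
    worry = p ** bias_exponent    # biased perception: sub-linear in p
    return group_size * worry

print(total_worry(1_000))    # ~100.0
print(total_worry(100_000))  # ~1000.0: bigger group, more total worry
```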

"and we can help change them in the interest of rational adaptation"

And why should you do that?

And the sum itself is a huge problem. There is no natural scale on which to compare utility functions. Divide one utility function by a billion, multiply the other by e^π, and they are still perfectly valid utility functions. In a study group at the FHI, we've been looking at various ways of combining utility functions - equivalently, of doing interpersonal utility comparisons (IUC). Turns out it's very hard, there seems to be no natural way of doing this, and a lot has also been written about this, concluding little. Unless your theory comes with a particular I…
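
A quick illustration of the no-natural-scale point (toy numbers of my own): rescaling one agent's utility function changes nothing about that agent's choices, but changes the interpersonal sum completely:

```python
# Two agents ranking options A and B.
u_alice = {"A": 1.0, "B": 0.0}
u_bob = {"A": 0.0, "B": 1.0}

def social_choice(u1, u2):
    return max(["A", "B"], key=lambda o: u1[o] + u2[o])

print(social_choice(u_alice, u_bob))         # 'A' (an exact tie; max picks the first)

# Multiply Bob's function by a billion: his own preferences are unchanged,
# but the "social" sum now always defers to him.
u_bob_scaled = {k: v * 1e9 for k, v in u_bob.items()}
print(social_choice(u_alice, u_bob_scaled))  # 'B'
```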

I like the quote, but I don't see how it relates to rationality.

0TimS
There are people in the real world who think that having a good enough decision-making process for making moral decisions (like deciding the right result in litigation) ensures a morally upright decision. Up to this point, decision-making procedures have always been implemented by humans, so the quality of the decision-making process is not enough to ensure that a morally upright decision will be made. The better guarantee of morally upright decision-making is morally upright decision-makers.

Man, even if you don't think so, you probably do have something to add to the group. Even if you don't have a lot of scientific/philosophical knowledge (I myself felt a little like this talking to the other guys, and I see that as a learning opportunity), you can add just by being a different person, with different experiences and background. Please show up if you can, even if you arrive late!

Ethics. Heads up: I'm going to ask you about some stuff about utilitarianism that I don't understand =P

The meetup was great! Diegocaleiro, leo arruda, dyokomizo, anthony and I were there, and I think we had a great time. I hope we can do this again, and that the others will show up in the next one!
