Social Insight: When a Lie Is Not a Lie; When a Truth Is Not a Truth

7 Bound_up 11 August 2017 06:28PM

The point has already been made that if you wish to truly be honest, it is not enough to speak the truth.

I generally don't tell people I'm an atheist (I describe my beliefs without using any common labels). Why? I know that if I say the words "I am an atheist," they will hear the following concepts:

- I positively believe there is no God

- I cannot be persuaded by evidence any more than most believers can be persuaded by evidence, i.e., I have a kind of faith in my atheism

- I wish to distance myself from members of religious tribes

As I said, the point has already been made: if I know that they will hear those false ideas when I say a certain phrase, how can I say I am honest in speaking it, knowing that I will cause them to have false beliefs? Hence the saying: if you wish to protect yourself, speak the truth; if you wish to be honest, speak so that truth will be heard.

Many a politician convincingly lies with truths by saying things that they know will be interpreted in a certain positive (and false) way, but which they can always defend as having been intended to convey some other meaning.

---

The New

There is a counterpart to this insight, come to me as I've begun to pay more attention to the flow of implicit social communication. If speaking the truth in a way you know will deceive is a lie, then perhaps telling a lie in a way that you know will communicate a true concept is not a lie.

I've relaxed my standards of truth-telling as I've come to understand this. "You're the best" and "You can do this" statements have been opened to me, no qualifiers needed. If I know that everyone in a group has to say "I have XYZ qualification," but I also know that no one actually believes anybody when they say it, I can comfortably recite those words, knowing that I'm not actually leading anybody to believe false things, and thus, am not being dishonest.

Politicians use this method, too, and I think I'm more or less okay with it. You see, we have a certain problem that arises from intellectual inequality. There are certain truths which literally cannot be spoken to some people. If someone has an IQ of 85, you literally cannot tell them the truth about a great number of things (or they cannot receive it). And there are a great many more people who have the raw potential to understand certain difficult truths, but whom you cannot reasonably tell these truths (they'd have to want to learn, put in effort, receive extensive teaching, etc).

What if some of these truths are pertinent to policy? What do you do, say a bunch of phrases that are "true" in a way you will interpret them, but which will only be heard as...

As what? What do people hear when you explain concepts they cannot understand? If I had to guess, very often they interpret this as an attack on their social standing, as an attempt by the speaker to establish themselves as a figure of superior ability, to whom they should defer. You sound uppity, cold, out-of-touch, maybe nerdy or socially inept.

So, then...if you're socially capable, you don't say those things. You give up. You can't speak the truth, you literally cannot make a great many people hear the real reasons why policy Z is a good idea; they have limited the vocabulary of the dialogue by their ability and willingness to engage. 

Your remaining moves are to limit yourself to their vocabulary, or say something outside of that vocabulary, all the nuance of which will evaporate en route to their ears, and which will be heard as a monochromatic "I think I'm better than you."

The details of this dynamic at play go on and on, but for now, I'll just say that this is the kind of thing Scott Adams is referring to when he says that what Trump has said is "emotionally true" even if it "doesn't pass the fact checks" (see dialogue with Sam Harris).

In a world of inequality, you pick your poison. Communicate what truths can be received by your audience, or...be a nerd, and stay out of elections.

Prediction should be a sport

10 chaosmage 10 August 2017 07:55AM

So, I've been thinking about prediction markets and why they aren't really catching on as much as I think they should.

My suspicion is that (besides Robin Hanson's signaling explanation, and the amount of work it takes to reach the large numbers of predictors where the quality of results becomes interesting) the basic problem with prediction markets is that they look and feel like gambling. Or at best like the stock market, which for the vast majority of people is no less distasteful.

Only a small minority of people are neither disgusted by nor terrified of gambling. Prediction markets right now are restricted to this small minority.

Poker used to have the same problem.

But over the last few decades Poker players have established that Poker is (also) a sport. They kept repeating that winning isn't purely a matter of luck, they acquired the various trappings of tournaments and leagues, and they developed a culture of admiration for the most skillful players that pays in prestige rather than only money and makes it customary for everyone involved to show their names and faces. For Poker, this has worked really well. There are many more Poker players, more really smart people are deciding to get into Poker, and I assume the art of the game has probably improved as well.

So we should consider re-framing prediction the same way.

The calibration game already does this to a degree, but sport needs competition, so results need to be comparable, so everyone needs to make predictions on the same events. You'd need something like standard cards of events that players place their predictions on.
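To make predictions on a shared card of events comparable, the tournament would need a proper scoring rule. The post doesn't specify one; the Brier score is one common choice, sketched here with hypothetical players and made-up numbers:

```python
# A minimal sketch of tournament scoring using the Brier score, a proper
# scoring rule: reporting your honest probability maximizes your expected
# standing, so there's no incentive to game the leaderboard.

def brier_score(predictions):
    """Mean squared error between stated probabilities and outcomes.

    predictions: list of (probability_event_happens, event_happened) pairs.
    Lower is better; constant 50% guessing scores exactly 0.25.
    """
    return sum((p - (1.0 if happened else 0.0)) ** 2
               for p, happened in predictions) / len(predictions)

# Every player predicts the same standard card of events, so scores
# are directly comparable on a leaderboard:
alice = brier_score([(0.9, True), (0.7, True), (0.2, False)])
bob = brier_score([(0.5, True), (0.5, True), (0.5, False)])

assert alice < bob  # confident, correct predictions beat hedging at 50%
```

Partial scores can be recomputed over only the events resolved so far, which is what would drive the continually updated leaderboard described below.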

Here's a fantasy of what it could look like.

  • Late in the year, a prediction tournament starts with the publication of a list of events in the coming year. Everybody is invited to enter the tournament (and maybe pay a small participation fee) by the end of the year, for a chance to be among the best predictors and win fame and prizes.
  • Everyone who enters plays the calibration game on the same list of events. All predictions are made public as soon as the submission period is over and the new year begins. Lots of discussion of each event's distribution of predictions.
  • Over the course of the year, events on the list happen or fail to happen. This allows for continually updated scores, a leaderboard and lots of blogging/journalistic opportunities.
  • Near the end of the year, as the leaderboard turns into a shortlist of potential winners, tension mounts. Conveniently, this is also when the next tournament starts.
  • At new year's, the winner is crowned (and I'm open to having that happen literally) at a big celebration which is also the end of the submission period for the next tournament and the revelation of what everyone is predicting for this next round. This is a big event that happens to be on a holiday, where more people have time for big events.
Of course this isn't intended to replace existing prediction markets. It is an addition to those, a fun and social thing with lots of PR potential and many opportunities to promote rationality. It should attract people to prediction who are not attracted to prediction markets. And it could be prototyped pretty cheaply, and developed further if it is as much fun as I think it would be.

People don't have beliefs any more than they have goals: Beliefs As Body Language

9 Bound_up 25 July 2017 05:41PM

Many people, anyway.

 

There is a common mistake in modeling humans, to think that they have actual goals, and that they deduce their actions from those goals. First there is a goal, then there is an action which is born from the goal. This is wrong.

More accurately, humans have a series of adaptations they execute. A series of behaviors which, under certain circumstances, kinda-sorta-maybe aim at the same-ish thing (like inclusive genetic fitness), but which will be executed regardless of whether or not they actually hit that thing.

Actions are not deduced from goals. The closest thing we have to "goals" are inferred from a messy conglomerate of actions, and are only an approximation of the reality that is there: just a group of behaviors.

-

I've come to see beliefs as very much the same way. Maybe some of us have real beliefs, real models. Some of us may in fact choose our statements about the world by deducing them from a foundational set of beliefs.

The mistake is repeated when we model most humans as having actual beliefs (nerds might be an exception). To suppose that their statements about reality, their propositions about the world, or their answers to questions are deduced from some foundational belief. First there is a belief, then there is a report on that belief, provided whenever anyone inquires about the belief they're carrying around. This is wrong.

More accurately, humans have a set of social moves/responses that they execute. Some of those moves APPEAR (to the naive nerd such as I) to be statements about how and what reality is. Each of these "statements" was probably vetted and accepted individually, without any consideration for the utterly foreign notion that the moves should be consistent or avoid contradiction. This sounds as tiresome to them as suggesting that their body language, or dance moves should be "consistent," for to them, the body language, dance moves, and "statements about reality" all belong to the same group of social moves, and thinking a social move is "consistent" is like thinking a certain posture/gesture is consistent or a color is tasty.

And missing the point like a nerd and taking things "literally" is exactly the kind of thing that reveals low social acuity.

Statements about individual beliefs are not deduced from a model of the world, just like actions are not deduced from goals. You can choose to interpret "I think we should help poor people" as a statement about the morality of helping poor people, if you want to miss the whole point, of course. We can suppose that "XYZ would be a good president" is a report on their model of someone's ability to fulfill a set of criteria. And if we interpret all their statements as though they were actual, REAL beliefs, we might be able to piece them together into something sort of like a model of the world.

All of which is pointless, missing the point, and counter-productive. Their statements don't add up to a model like ours might, any more than our behaviors really add up to a goal. The "model" that comes out of aggregating their socially learned behaviors will likely be inconsistent, but if you think that'll matter to them, you've fundamentally misunderstood what they're doing. You're trying to find their beliefs, but they don't HAVE any. There IS nothing more. It's just a set of cached responses. (Though you might find, if you interpret their propositions about reality as signals about tribal affiliation and personality traits, that they're quite consistent).

"What do you think about X" is re-interpreted and answered as though you had said "What do good, high-status groups (that you can plausibly be a part of) think about X?"

"I disagree" doesn't mean they think your model is wrong; they probably don't realize you have a model. Just as you interpret their social moves as propositional statements and misunderstand, so they interpret your propositional statements as social moves and misunderstand. If you ask how their model differs from yours, it'll be interpreted as a generic challenge to their tribe/status, and they'll respond like they do to such challenges. You might be confused by their hostility, or by how they change the subject. You think you're talking about X and they've switched to Y. While they'll think you've challenged them, and respond with a similar challenge, the "content" of the sentences need not be considered; the important thing is to parry the social attack and maybe counter-attack. Both perspectives make internal sense.

As far as they're concerned, the entire meaning of your statement was basically equivalent to a snarl, so they snarled back. Beliefs As Body Language.

Despite the obvious exceptions and caveats, this has been extremely useful for me in understanding less nerdy people. I try not to take what are, to them, just the verbal equivalent of gestures/postures or dance moves, and interpret them as propositional statements about the nature of reality (even though they REALLY sound like they're making propositional statements about the nature of reality), because that misunderstands what they're actually trying to communicate. The content of their sentences is not the point. There is no content. (None about reality, that is. All content is social). They do not HAVE beliefs. There's nothing to report on.

[Link] Tenenbaum et al. (2017) on the computational mechanisms of learning a commonsense moral theory

1 Kaj_Sotala 25 July 2017 01:36PM

Book Review: Mathematics for Computer Science (Suggestion for MIRI Research Guide)

11 richard_reitz 22 July 2017 07:26PM

tl;dr: I read Mathematics for Computer Science (MCS) and found it excellent. I sampled Discrete Mathematics and Its Applications (Rosen)—currently recommended in MIRI's research guide—as well as Concrete Mathematics and Discrete Mathematics with Applications (Epp), which appear to be MCS's competition. Based on these partial readings, I found MCS to be the best overall text. I therefore recommend MIRI change the recommendation in its research guide.

Introduction

MCS is used at MIT for their introductory discrete math course, 6.042, which appears to be taken primarily by second-semester freshmen and sophomores. You can find OpenCourseWare archives from 2010 and 2015, although the book is self-contained; I never had occasion to use them throughout my reading.

If you liked Computability and Logic (review), currently in the MIRI research guide, you'll like MCS:

MCS is a wonderful book. It's well written. It's rigorous, but does a nice job of motivating the material. It efficiently proves a number of counterintuitive results and then helps you see them as intuitively obvious. Freed from the constraint of printing cost, it contains many diagrams which are generally useful. You can find the pdf here or, if that link breaks, by googling "Mathematics for Computer Science". (See section 21.2 for why this works.)

MCS is regularly updated during the semester. Based on the revision dates given on the cover, I suspect the authors update it roughly weekly while the course is running. The current version is 87 pages longer than the 2015 version, suggesting ~40 pages of material is added a year. My favorite thing about the constant updates was that I never needed to double-check statements about our current state of knowledge to see if anything had changed since publication.

MCS is licensed under a Creative Commons attribution share-alike license: it is free in the sense of both beer and freedom. I'm a big fan of such copyleft licenses, so I give MIT major props. I've tried to remain unbiased in my review, but halo effect suggests my views on the text might be affected by the text's license: salt accordingly.

Prerequisites

The only prerequisite is single-variable calculus. In particular, I noted integration, differentiation, and convergence/infinite sums coming up. That said, I don't remember them appearing in sections that provided a lot of dependencies: with just a first course in algebra, I feel a smart 14-year-old could get through 80–90% of the book, albeit with some help, mostly in places where "do a bunch of algebra" steps are omitted. An extra 4–5 years of practice doing algebraic manipulations makes a difference.

MCS is also an introduction to proofwriting. In my experience, writing mathematical proofs is a skill complex enough to require human feedback to get all the nuances of why something works and why something else doesn't work and why one approach is better than another. If you've never written proofs before and would like a human to give you feedback, please pm me.

Comparison to Other Discrete Math Texts

Rosen

I randomly sampled section 4.3 of Rosen, on primes and greatest common divisors, and was very unimpressed. Rosen states the fundamental theorem of arithmetic without a proof. The next theorem had a proof which was twice as long and half as elegant as it could have been. The writing was correct but unmotivating and wordy. For instance, Rosen writes "If n is a composite integer", which is redundant, since all composite numbers are integers, so he could have just said "If n is composite".

In the original Course Recommendations for Friendliness Researchers, Louie responded to Rosen's negative reviews:

people taking my recommendations would be geniuses by-and-large and that the harder book would be better in the long-run for the brightest people who studied from it.

Based on the sample I read, Rosen is significantly dumbed-down relative to MCS. Rosen does not prove the fundamental theorem of arithmetic, whereas MCS proves it in section 9.4. For the next theorem, Rosen gives an inelegant proof when a much sleeker—but reasonably evident!—proof exists, making it feel like Rosen expected the reader to not be able to follow the sleeker proof. Rosen's use of "composite integer" instead of "composite" suggests he assumes the reader doesn't understand that the only objects one describes as composite are integers; MCS does not contain the string "composite integer".

In the section I read, Rosen has worked examples for finding gcd(24, 36) and gcd(17, 22), something I remember doing when I was 12. It's almost like Rosen was spoon-feeding how to guess the teacher's password for the student to regurgitate on an exam instead of building insight.

Concrete Mathematics

There are probably individuals who would prefer Concrete Mathematics to MCS. These people are probably into witchcraft.

I explain by way of example. In section 21.1.1, MCS presents a very sleek, but extremely nonobvious, proof of gambler's ruin using a clever argument courtesy of Pascal. In section 21.1.2, MCS gives a proof that doesn't require the reader to be "as ingenuious Pascal [sic]". As an individual who is decidedly not as ingenious as Pascal was, I appreciate this.

More generally, say we want to prove a theorem that looks something like "If A, then B has property C." You start at A and, appealing to the definition of C, show that B has it. There's probably some cleverness involved in doing so, but you start at the obvious place (A), end in the obvious place (B satisfies the definition of C), and don't rely on any crazy, seemingly-unrelated insights. Let's call this sort of proof mundane.

(Note that mundane is far from mechanical. Most of the proofs in Baby Rudin are mundane, but require significant cleverness and work to generate independently.)

There is a virtue in mundane proofs: a smart reader can usually generate them after reading the theorem but before reading its proof. Doing so is beneficial, since generating the proof makes the theorem more memorable. It also gives the reader practice building intuition by playing around with the mathematical objects, and helps them improve their proofwriting by comparing their output to a maximally refined proof.

At the opposite end of the spectrum from mundane is witchcraft. Proofs that use witchcraft typically have a step where you demonstrate that you're as ingenious as Pascal by having a seemingly-unrelated insight that makes everything easier. Notice that, even if you are as ingenious as Pascal, you won't necessarily be able to generate these insights quickly enough to get through the text at any reasonable pace.

For the reasons listed above, I prefer mundane proofs. This isn't to say MCS is devoid of witchcraft: sometimes it's the best or only way of getting a proof. The difference is that MCS uses mundane proofs whenever possible whereas Concrete Mathematics invokes witchcraft left and right. This is why I don't recommend it.

Individuals who are readily as ingenious as Pascal, don't want the skill-building benefits of mundane proofs, or prefer the whimsy of witchcraft may prefer Concrete Mathematics.

Epp

I randomly sampled section 12.2 of Epp and found it somewhat dry but wholly unobjectionable. Unlike Rosen, I felt like Epp was writing for an intelligent human being (though I was reading much further along in the book, so maybe Rosen assumed the reader was still struggling with the idea of proof). Unlike Concrete Mathematics, I detected no witchcraft. However, I felt that Epp had inferior motivation and was written less engagingly. Epp is also not licensed under Creative Commons.

Coverage

Epp, Rosen, and MCS are all ~1000 pages long, whereas Concrete Mathematics is ~675. To determine what these books covered that might not be in MCS, I looked through their tables of contents for things I didn't recognize. The former three have the same core coverage, although Epp and Rosen go into material you would find in Computability and Logic or Sipser (also part of the research guide), whereas MCS spends more time developing discrete probability. Based on the samples I read, Epp and MCS have about the same density, whereas Rosen spends little time building insight and a lot of time showing how to do really basic, obvious stuff. I would expect Epp and MCS to have roughly the same amount of content covering mostly (but not entirely) the same stuff, and Rosen to offer a mere shadow of the insight of the other two.

Concrete Mathematics seems to contain a subset of MCS's topics, but from the sections I read, I expect the presentation to be wildly different.

Complaints

My only substantial complaint about MCS is that, to my knowledge, the source LaTeX is not available. Contrast this with SICP, which has its HTML source available; this has resulted in a proliferation of versions tailored for different use cases. It'd be nice, for instance, to have a print-friendly version of MCS (perhaps with fewer pages), plus a version that fit nicely onto the small screen of an ereader or mobile device, plus a version with the same aspect ratio as my monitor. This all would be extremely easy to generate given the source. It would also facilitate crowdsourced proofreading: there are more than a few typos, although they don't impede comprehension. At the very least, I wish there were somewhere to submit errata.

Some parts of MCS were notation-heavy. To quote what a professor once wrote on a problem set of mine:

I'm not sure all the notation actually serves the goal of clarifying the argument for the reader. Of course, such notation is sometimes needed. But when it is not needed, it can function as a tool with which to bludgeon the reader…

I found myself referring to Wikipedia's glossary of graph theory terms more than a few times when I was making definitions to put into Anki. Not sure if this is measuring a weak section or a really good glossary or something else.

A Note on Printing

A lot of people like printed copies of their books. One benefit of MCS I've put forward is that it's free (as in beer), so I investigated how much printing would cost.

I checked the local print shops and Kinko's online, and was unable to find printing under $60; a typical price was around $70, with the option to spend $85 if I wanted nicer paper. This was more than I had expected, and between ⅓ and ½ (ish) of the price of Rosen or Epp.

Personally, I think printing is counterproductive, since the PDF has clickable links.

Final Thoughts

Despite sharing first names, I am not Richard Stallman. I prefer the license on MCS to the license on its competitors, but I wouldn't recommend it unless I thought the text itself was superior. I would recommend baby Rudin (nonfree) over French's Introduction to Real Analysis; Hoffman and Kunze's Linear Algebra (nonfree) over Jim Hefferson's Linear Algebra; and Epp over 2010!MCS. The freer the better, but that consideration is trumped by the quality of the text. When you're spending >100 hours working out of a book that provides foundational knowledge for the rest of your life, ~$150 and a loss of freedom is a price many would pay for better quality.

Eliezer writes:

Tell a real educator about how Earth classes are taught in three-month-sized units, and they would’ve sputtered and asked how you can iterate fast enough to learn how to teach that.

Rosen is in its seventh edition. Epp is in its fourth edition and Concrete Mathematics its second. The earliest copy of MCS I've happened across comes from 2004. Near as I can tell, it is improved every time the authors go through the material with their students, which would put it in its 25th edition.

And you know what? It's just going to keep getting better faster than anything else.

Acknowledgements

Thank you to Gram Stone for reviewing drafts of this review.

Rationalist sites worth archiving?

23 gwern 11 September 2011 03:24PM

One of my long-standing interests is in writing content that will age gracefully, but as a child of the Internet, I am addicted to linking and linkrot is profoundly threatening to me, so another interest of mine is in archiving URLs; my current methodology is a combination of archiving my browsing in public archives like Internet Archive and locally, and proactively archiving entire sites. Anyway, sites I have previously archived in part or in total include:

  1. LessWrong (I may've caused some downtime here, sorry about that)
  2. OvercomingBias
  3. SL4
  4. Chronopause.com
  5. Yudkowsky.net (in progress)
  6. Singinst.org
  7. PredictionBook.com (for obvious reasons)
  8. LongBets.org & LongNow.org
  9. Intrade.com
  10. Commonsenseatheism.com
  11. finney.org
  12. nickbostrom.com
  13. unenumerated.blogspot.com & http://szabo.best.vwh.net/
  14. weidai.com
  15. mattmahoney.net
  16. aibeliefs.blogspot.com

Having recently added WikiWix to my archival bot, I was thinking of re-running various sites, and I'd like to know - what other LW-related websites are there that people would like to be able to access somewhere in 30 or 40 years?

(This is an important long-term issue, and I don't want to miss any important sites, so I am posting this as an Article rather than the usual Discussion. I already regret not archiving Robert Bradbury's full personal website - having only his Matrioshka Brains article - and do not wish to repeat the mistake.)

Rescuing the Extropy Magazine archives

18 Deku-shrub 01 July 2017 02:25PM

This may be of more interest to old-school Extropians: you may be aware that the defunct Extropy Institute's website is very slow and broken, and certainly inaccessible to newcomers.

Anyhow, I have recently pieced together most of the early publications (1988–1996) of 'Extropy: Vaccine For Future Shock', later 'Extropy: Journal of Transhumanist Thought', as part of mapping the history of Extropianism.

You'll find some really interesting very early articles on neural augmentation, transhumanism, libertarianism, AI (featuring Eliezer), radical economics (featuring Robin Hanson of course) and even decentralised payment systems.

Along with the ExI mailing list, which is not yet wikified, it provides great insight into early radical technological thinking, from an era mostly known for the early hacker movement.

Let me know your thoughts/feedback!

https://hpluspedia.org/wiki/Extropy_Magazines

What useless things did you understand recently?

7 cousin_it 28 June 2017 07:32PM

Please reply in the comments with things you understood recently. The only condition is that they have to be useless in your daily life. For example, "I found this idea that defeats procrastination" doesn't count, because it sounds useful and you might be deluded about its truth. Whereas "I figured out how construction cranes are constructed" qualifies, because you aren't likely to use it and it will stay true tomorrow.

I'll start. Today I understood how Heyting algebras work as a model for intuitionistic logic. The main idea is that you represent sentences as shapes. So you might have two sentences A and B shown as two circles, then "A and B" is their intersection, "A or B" is their union, etc. But "A implies B" doesn't mean one circle lies inside the other, as you might think! Instead it's a shape too, consisting of all points that lie outside A or inside B (or both). There were some other details about closed and open sets, but these didn't cause a problem for me, while "A implies B" made me stumble for some reason. I probably won't use Heyting algebras for anything ever, but it was pretty fun to figure out.
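The shapes-as-sentences picture above can be sketched in a few lines of code. This is a deliberately simplified model (my assumption, not the post's): over a finite discrete space every set is "open," so the interior step that a real topological Heyting algebra requires drops out, and implication reduces to plain set operations.

```python
# Sentences as subsets of a finite space of "points" (worlds).
space = frozenset(range(10))

def and_(a, b):
    return a & b          # "A and B" is the intersection

def or_(a, b):
    return a | b          # "A or B" is the union

def implies(a, b):
    # "A implies B" is itself a shape: all points outside A or inside B.
    # (In a real Heyting algebra on a topological space you'd take the
    # interior of this union; with finite discrete sets it's already open.)
    return (space - a) | b

def not_(a):
    # Negation is "A implies False", i.e. "A implies the empty shape".
    return implies(a, frozenset())

A = frozenset({0, 1, 2, 3})
B = frozenset({2, 3, 4, 5})

# The counterintuitive bit from the post: implies(A, B) is not a yes/no
# answer about containment, it's another region of the space.
region = implies(A, B)

# Only when A lies entirely inside B does the region cover everything,
# which is the algebra's way of saying the implication "holds".
assert implies(frozenset({2, 3}), B) == space
```

Double negation illustrates the intuitionistic flavor: in this discrete model `not_(not_(A)) == A` happens to hold, but once the interior operation matters (e.g. with open intervals on the real line), it can fail, which is exactly what distinguishes Heyting algebras from Boolean ones.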

Your turn!

PS: please don't feel pressured to post something super advanced. It's really, honestly okay to post basic things, like why a stream of tap water narrows as it falls, or why the sky is blue (though I don't claim to understand that one :-))

Idea for LessWrong: Video Tutoring

13 adamzerner 23 June 2017 09:40PM

Update 7/9/17: I propose that Learners individually reach out to Teachers, and set up meetings. It seems like the most practical way of getting started, but I am not sure and am definitely open to other ideas. Other notes:

  • There seems to be agreement that the best way to do this is individualized guidance, rather than lectures and curriculums. Eg. the Teacher "debugging" the Learner. Assuming that approach, it is probably best for the amount of Learners in a session to be small.
  • Consider that it may make sense for you to act as a Teacher, even if you don't have a super strong grasp of the topic. For example, I know a decent amount about computer science, but don't have a super strong grasp of it. Still, I believe it would be valuable for me to teach computer science to others. I can definitely offer value to people with no CS background. And for people who do have a CS background, there could be value in us taking turns teaching/learning, and debugging each other.
  • We may not be perfect at this in the beginning, but let's dive in and see what we can do! I think it'd be a good idea to comment on this post with what did/didn't work for you, so we as a group could learn and improve.
  • I pinned http://lesswrong.com/r/discussion/lw/p69/idea_for_lesswrong_video_tutoring/ to #productivity on the LessWrongers Slack group.

Update 6/28/17: With 14 people currently interested, it does seem that there's enough to get started. However, I'd like to give it a bit more time and see how much overall interest we get.

Idea: we coordinate to teach each other things via video chat.

  • We (mostly) all like learning. Whether it be for fun, curiosity, a stepping stone towards our goals.
  • My intuition is that there's a lot of us who also enjoy teaching. I do, personally.
  • Enjoyment aside, teaching is a good way of solidifying one's knowledge.
  • Perhaps there would be positive unintended consequences. Eg. socially.
  • Why video? a) I assume that medium is better for education than simply text. b) Social and motivational benefits, maybe. A downside to video is that some may find it intimidating.
  • It may be nice to evolve this into a group project where we iteratively figure out how to do a really good job teaching certain topics.
  • I see the main value in personalization, as opposed to passive lectures/seminars. Those already exist, and are plentiful for most topics. What isn't easily accessible is personalization. With that said, I figure it'd make sense to have about 5 learners per teacher.

So, this seems like something that would be mutually beneficial. To get started, we'd need:

  1. A place to do this. No problem: there's Hangouts, Skype, https://talky.io/, etc.
  2. To coordinate topics and times.

Personally, I'm not sure how much I can offer as far as doing the teaching. I worked as a web developer for 1.5 years and have been teaching myself computer science. I could be helpful to those unfamiliar with those fields, but probably not too much help for those already in the field and looking to grow. But I'm interested in learning about lots of things!

Perhaps a good place to start would be to record in some spreadsheet, a) people who want to teach, b) what topics, and c) who is interested in being a Learner. Getting more specific about who wants to learn what may be overkill, as we all seem to have roughly similar interests. Or maybe it isn't.

If you're interested in being a Learner or a Teacher, please add yourself to this spreadsheet.

Thought experiment: coarse-grained VR utopia

15 cousin_it 14 June 2017 08:03AM

I think I've come up with a fun thought experiment about friendly AI. It's pretty obvious in retrospect, but I haven't seen it posted before. 

When thinking about what friendly AI should do, one big source of difficulty is that the inputs are supposed to be human intuitions, based on our coarse-grained and confused world models. While the AI's actions are supposed to be fine-grained actions based on the true nature of the universe, which can turn out very weird. That leads to a messy problem of translating preferences from one domain to another, which crops up everywhere in FAI thinking, Wei's comment and Eliezer's writeup are good places to start.

What I just realized is that you can handwave the problem away, by imagining a universe whose true nature agrees with human intuitions by fiat. Think of it as a coarse-grained virtual reality where everything is built from polygons and textures instead of atoms, and all interactions between objects are explicitly coded. It would contain player avatars, controlled by ordinary human brains sitting outside the simulation (so the simulation doesn't even need to support thought).

The FAI-relevant question is: How hard is it to describe a coarse-grained VR utopia that you would agree to live in?

If describing such a utopia is feasible at all, it involves thinking about only human-scale experiences, not physics or tech. So in theory we could hand it off to human philosophers or some other human-based procedure, thus dealing with "complexity of value" without much risk. Then we could launch a powerful AI aimed at rebuilding reality to match it (more concretely, making the world's conscious experiences match a specific coarse-grained VR utopia, without any extra hidden suffering). That's still a very hard task, because it requires solving decision theory and the problem of consciousness, but it seems more manageable than solving friendliness completely. The resulting world would be suboptimal in many ways, e.g. it wouldn't have much room for science or self-modification, but it might be enough to avert AI disaster (!)

I'm not proposing this as a plan for FAI, because we can probably come up with something better. But what do you think of it as a thought experiment? Is it a useful way to split up the problem, separating the complexity of human values from the complexity of non-human nature?
