All of fortyeridania's Comments + Replies

Two points:

First, can you clarify what you mean by rational persuasion, if you are distinguishing it from logical proof? Do you mean that we can skip arguing for some premises because we can rely on our intuition to identify them as already shared? Or do you mean that we need not aim for deductive certainty--a lower confidence level is acceptable? Or something else?

Second, I appreciate this post because what Harris's disagreements with others so often need is exactly dissolution. And you've accurately described Harris's project: He is trying... (read more)

3Tyrrell_McAllister
I don't mean to distinguish it from logical proof in the everyday sense of that term. Rational persuasion can be as logically rigorous as the circumstances require. What I'm distinguishing "rational persuasion" from is a whole model of moral argumentation that I'm calling "logical argumentation" for the purposes of this post.

If you take the model of logical argumentation as your ideal, then you act as if a "perfect" moral argument could be embedded, from beginning to end, from axiomatic assumptions to "ought"-laden conclusions, as a formal proof in a formal logical system. On the other hand, if you're working from a model of dialectical argumentation, then you act as if the natural endpoint is to persuade a rational agent to act. This doesn't mean that any one argument has to work for all agents. Harris, for example, is interested in making arguments only to agents who, in the limit of ideal reflection, acknowledge that a universe consisting exclusively of extreme suffering would be bad. However, you may think that you could still find arguments that would be persuasive (in the limit of ideal reflection) to nearly all humans. For the purposes of this post, I'm leaving much of this open. I'm just trying to describe how people are guided by various vague ideals about what ideal moral argumentation "should be".

But you're right that the word "rational" is doing some work here. Roughly, let's say that you're a rational agent if you act effectively to bring the world into states that you prefer. On this ideal, to decide how to act, you just need information about the world. Your own preferences do the work of using that information to evaluate plans of action. However, you aren't omniscient, so you benefit from hearing information from other people and even from having them draw out some of its implications for you. So you find value in participating in conversations about what to do. Nonetheless, you aren't affected by rhetorical fireworks, and you don't get overwhe

Summary:

Regardless of whether one adopts a pessimistic or optimistic view of artificial intelligence, policy will shape how it affects society. This column looks at both the policies that will influence the diffusion of AI and the policies that will address its consequences. One of the most significant long-run policy issues relates to the potential for artificial intelligence to increase inequality.

The author is Selmer Bringsjord.

Academic: https://homepages.rpi.edu/~brings/

Wikipedia: https://en.wikipedia.org/wiki/Selmer_Bringsjord

gjm150

Bringsjord is the author of a "proof" that P=NP. It is ... not an impressive piece of work, or at least I don't find it so. And it fails to be impressive in a way that seems highly relevant to philosophizing about AI. Namely, Bringsjord seems to think he's entitled to leap from "such-and-such a physical system seems like it correctly finds optimal solutions to small instances of the Steiner tree problem" to "such-and-such a physical system will somehow find the optimal solution in every instance of the Steiner tree proble... (read more)

Author:

  • Website: https://www.joshuagans.com
  • Wikipedia: https://en.wikipedia.org/wiki/Joshua_Gans

Summary:

Philosophers have speculated that an AI assigned a task such as creating paperclips might cause an apocalypse by learning to divert ever-increasing resources to the task, and then learning how to resist our attempts to turn it off. But this column argues that, to do this, the paperclip-making AI would need to create another AI that could acquire power both over humans and over itself, and so it would self-regulate to prevent this outcome. Humans who cr
... (read more)

Thanks to Alex Tabarrok at Marginal Revolution: https://marginalrevolution.com/marginalrevolution/2018/05/one-parameter-equation-can-exactly-fit-scatter-plot.html

Title: "One parameter is always enough"

Author: Steven T. Piantadosi (University of Rochester)

Abstract:

We construct an elementary equation with a single real valued parameter that is capable of fitting any “scatter plot” on any number of points to within a fixed precision. Specifically, given a fixed ε > 0, we may construct fθ so that for any collection of ordered pairs {(xj
... (read more)
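
For the curious, here is the idea in rough outline (my own reconstruction, not the author's code; the block width tau and the sample y-values are purely illustrative). The family is fθ(x) = sin²(2^(xτ) · arcsin √θ), and θ stores the entire data set in its binary expansion:

```python
# A rough demo of the construction as I understand it: encode tau-bit blocks
# of arcsin(sqrt(y_j))/pi into one real z, fold z into theta, and let the
# doubling inside sin^2 shift each block back out on decoding.
from mpmath import mp, mpf, asin, sin, sqrt, pi, floor

mp.dps = 200                 # generous working precision
tau = 30                     # bits spent per data point (illustrative)
ys = [mpf("0.25"), mpf("0.8"), mpf("0.5")]   # made-up y-values, each in (0, 1)

z = mpf(0)
for j, y in enumerate(ys):
    block = floor(asin(sqrt(y)) / pi * 2**tau)
    z += block / mpf(2) ** ((j + 1) * tau)
theta = sin(pi * z) ** 2     # the single parameter

def f(theta, x):
    return sin(2 ** (x * tau) * asin(sqrt(theta))) ** 2

for j, y in enumerate(ys):
    print(float(f(theta, j)), "~", float(y))   # agree to roughly 2^-tau
```

The catch is that θ must be written down to about τ bits per data point, so the "one parameter" carries all the information: the equation is simple, but the parameter itself is astronomically precise.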

HT to Tyler Cowen: https://marginalrevolution.com/marginalrevolution/2018/05/erik-brynjolfsson-interviews-daniel-kahneman.html

5Sniffnoy
Note also that you could easily have seen your initial comment was wrong just by computing the truth tables! Equivalence in classical propositional logic is pretty easy -- you don't need to think about proofs, just write down the truth tables!

The term "affordance width" makes sense, but perhaps there's no need to coin a new term when "tolerance" exists already.

rossry120

I think I disagree; "tolerance", to me, seems to point more towards the special case where {B} is some external events and {X} and {Y} are internal reactions. To talk about social affordances, as the OP does, you'd have to talk about the tolerances of others for {B} done by different people [A-E] -- and you've made less obvious the fact that the tolerance of [Q] for {B} done by [A] is different than the tolerance of [Q] for {B} done by [E] -- the entire content of the post.

A ∨ B ⟷ ¬A ⟶ B

But this is not true, because ¬(¬A ⟶ B) ⟶ A ∨ B. With what you've written you can get from the left side to the right side, but you can't get from the right side to the left side.

What you need is: "Either Alice did it or Bob did it. If it wasn't Alice, then it was Bob; and if it wasn't Bob, then it was Alice."

Thus: A ∨ B ⟷ ((¬A ⟶ B) ∧ (¬B ⟶ A))

9Sniffnoy
No, what he's written is correct[0]. A ∨ B, ¬A ⟶ B, and ¬B ⟶ A are all equivalent. (Hence your last statement is also correct!) Note for instance that the last two are just contrapositives of one another and so equivalent. [0]In classical logic, obviously, for the nitpickers; that's all I'm going to consider here.
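
If you'd rather check this mechanically than by hand, here is a minimal sketch (mine, not from the thread; the `implies` helper is just shorthand for material implication):

```python
# Brute-force the truth tables to confirm that "A or B", "~A -> B", and
# "~B -> A" agree on every assignment in classical propositional logic.
from itertools import product

def implies(p, q):
    # Material implication: p -> q is false only when p is true and q is false.
    return (not p) or q

for A, B in product([False, True], repeat=2):
    assert (A or B) == implies(not A, B) == implies(not B, A)

print("A v B, ~A -> B, and ~B -> A have identical truth tables.")
```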

Interesting post, and I'm sure "not having thought of it" helps explain the recency of vehicular attacks (though see the comment from /u/CronoDAS questioning the premise that they are as recent as they may seem).

Another factor: Other attractive methods, previously easy, are now harder--lowering the opportunity cost of a vehicular attack. For example, increased surveillance has made carefully coordinated attacks harder. And perhaps stricter regulations have made it harder to obtain bomb-making materials or disease agents.

This also helps to ex... (read more)

How much support is there for promotion of prediction markets? I see three levels:

1. Legalization of real-money markets (they are legal in some places, but their illegality or legal dubiousness in the US--combined with the centrality of US companies in global finance--makes it hard to run a big one without US permission)

2. Subsidies for real-money markets in policy-relevant issues, as advocated by Robin Hanson

3. Use of prediction markets to determine policy (futarchy), as envisioned by Robin Hanson

1. We want public policy that's backed up by empirical evidence. We want a government that runs controlled trials to find out what policies work.

This seems either empty (because no policy has zero empirical backing), throttling (because you can't possibly have an adequate controlled trial on every proposal), or pointless (because most political disputes are not good-faith disagreements over empirical support).

Second, as this list seems specific to one country, I wonder how rationalists who don't follow its politics can inform this consensus.... (read more)

Upvoted for the suggestion to reword the euthanasia point.

Useful distinction: "rationalist" vs. "rational person." By the former I mean someone who deliberately strives to be the latter. By the latter I mean someone who wins systematically in their life.

It's possible that rationalists tend to be geeks, especially if the most heavily promoted methods for deliberately improving rationality are mathy things like explicit Bayesian reasoning, or if most of the material advocating rationality is heavily dependent on tech metaphors.

Rational people need not fit the stereotypes you've listed.... (read more)

Hey, I just saw this post. I like it. The coin example is a good way to lead in, and the non-quant teacher example is helpful too. But here's a quibble:

If we follow Bayes’ Theorem, then nothing is just true. Things are instead only probable because they are backed up by evidence.

The map is not the territory; things are still true or false. Bayes' theorem doesn't say anything about the nature of truth itself; whatever your theory of truth, that should not be affected by the acknowledgement of Bayes' theorem. Rather, it's our beliefs (or at least the beliefs of an ideal Bayesian agent) that are on a spectrum of confidence.
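
To make that concrete, here is a small illustration (my own, with made-up numbers, not from the post under discussion): Bayes' theorem moves a degree of belief along the confidence spectrum, while the coin itself is double-headed or it isn't, regardless.

```python
# A minimal sketch (my own): Bayesian updating changes the map, not the territory.
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) via Bayes' theorem."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Hypothesis H: the coin is double-headed (vs. fair). Each observed heads
# shifts our confidence, but never makes H "just true":
belief = 0.5
for _ in range(3):
    belief = posterior(belief, p_e_given_h=1.0, p_e_given_not_h=0.5)
print(round(belief, 3))  # 0.889 after three heads in a row
```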

Introduction:

Artificial intelligence (AI) is useful for optimally controlling an existing system, one with clearly understood risks. It excels at pattern matching and control mechanisms. Given enough observations and a strong signal, it can identify deep dynamic structures much more robustly than any human can and is far superior in areas that require the statistical evaluation of large quantities of data. It can do so without human intervention.

We can leave an AI machine in the day-to-day charge of such a system, automatically self-correcting and lea

... (read more)

I agree with most of what you've said, but here's a quibble:

If you are an evil pharma-corp, vaccines are a terrible way to be evil.

Unless you're one of the sellers of vaccines, right?

That's too bad; it probably doesn't have to be that way. If you can articulate what infrastructural features of 1.0 are missing from 2.0, perhaps the folks at 2.0 can accommodate them in some way.

3Lumifer
The 2.0 folks made a deliberate decision to step away from "let's just all talk about stuff" towards "people should write essays and others should attend to these essays and respectfully comment".
4ChristianKl
At the moment their focus is on making 2.0 fast to load and given how important page loading time is I agree with that prioritisation. I think afterwards they want to improve the commenting experience. One critical feature that's currently missing is notifications when someone replies to one's post or comment. I could also imagine a page that allows me to see all comments on comments that I upvoted and similar ways to list comments towards which I might reply. Currently, there's no open thread on LW 2.0 and it might be a good idea to have one.

I don't remember if the Sequences cover it. But if you haven't already, you might check out SEP's section on Replies to the Chinese Room Argument.

0Erfeyah
That is great! Thanks :)

  • Scholarly article

  • Title: Do scholars follow Betteridge’s Law?

  • Answer is no

Nice.

I know this is Betteridge's law of headlines, but do you happen to know if it's accurate?

3ignoranceprior
According to this study, the law appears to be inaccurate for academic articles.

This was also explored by Benedict Evans in this blog post and this EconTalk interview, mentioned in the most recent feed thread.

4chaosmage
Wow, this is amazing! Thank you! He talks about various general effects rather than specific business opportunities, so the overlap is very small, but his vision and mine seem entirely compatible.

In addition to what you've cited, here are some methods I've used and liked:

  1. Email professors to ask for recommendations. Be polite, concise, and specific (e.g., why exactly do you want to learn more about x?).

  2. David Frum says he used to pick a random book on his chosen topic, check which books kept showing up in the footnotes, then repeat with those books. A couple rounds yielded a good picture of who the recognized authorities were. (I pointed this out in a Rationality Quotes thread in 2015. Link: http://lesswrong.com/lw/lzn/rationality_quotes_thread_a

... (read more)
1AABoyles
1a. If a professor is a suitable source for a recommendation, they've probably taught a course on the topic, and that course's syllabus may be available on the open web without emailing the professor.
2Dr_Manhattan
This is literally doing PageRank, by hand, on books. There's got to be a better way.
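
For what it's worth, the "better way" might look something like the sketch below (my own toy illustration; the book titles and citation edges are invented): run power-iteration PageRank over the citation graph instead of chasing footnotes by hand.

```python
# Toy PageRank over a made-up book-citation graph.
citations = {  # book -> books it cites in its footnotes (all hypothetical)
    "survey": ["classic", "monograph"],
    "monograph": ["classic"],
    "textbook": ["classic", "monograph"],
    "classic": [],
}
books = list(citations)
rank = {b: 1 / len(books) for b in books}
damping = 0.85

for _ in range(50):  # power iteration to (approximate) convergence
    new_rank = {b: (1 - damping) / len(books) for b in books}
    for b, cited in citations.items():
        for c in cited:
            new_rank[c] += damping * rank[b] / len(cited)
        if not cited:  # dangling book: spread its mass evenly
            for c in books:
                new_rank[c] += damping * rank[b] / len(books)
    rank = new_rank

print(max(rank, key=rank.get))  # "classic" -- the recognized authority
```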

but I don't feel them becoming habitual as I would like

Have you noticed any improvement? For example, an increase in the amount of time you feel able to be friendly? If so, then be not discouraged! If not, try changing the reward structure.

For example, you can explicitly reward yourself for exceeding thresholds (an hour of non-stop small talk --> extra dark chocolate) or meeting challenges (a friendly conversation with that guy --> watch a light documentary). Start small and easy. Or: Some forms of friendly interaction might be more rewarding than... (read more)

Is one's answer to the dilemma supposed to illuminate something about the title question? Presumably a large part of the worth-livingness of life consists in the NPV of future experiences, not just in past experiences.

  • Title question: Yes. Proof by revealed preference:

(1) Life is a good with free disposal.

(2) I am alive.

(3) Therefore, life is worth living.

  • Dilemma: Choose the second, on the odds that God changes its mind and lets you keep living, can't find you again the second time around, is itself annihilated in the interim, etc.

Quibble: A... (read more)

Maybe I should write a book!

I hope you do, so I can capitalize on my knowledge of your longstanding plan to capitalize on your knowledge of Adams' longstanding plan to capitalize on his knowledge that Trump would win with a book with a book with a book.

And even if you have one, the further that real-life market is away from the abstract free market, the less prices converge to cost + usual profit.

True.

I suspect that there is no market for unique, poorly-estimable risks.

That's probably true for most such risks, but it's worth noting that there are markets for some forms of oddball events. One example is prize indemnity insurance (contest insurance).

The formatting is broken

Fixed, thanks.

For unique, poorly-estimable risks the insurance industry had strong incentive to overprice them

Plausible, and one should certainly beware of biases like this. On the other hand, given conventional assumptions regarding the competitiveness of markets, shouldn't prices converge toward a rate that is "fair" in the sense that it reflects available knowledge?

0Lumifer
That requires a market. And even if you have one, the further that real-life market is away from the abstract free market, the less prices converge to cost + usual profit. I suspect that there is no market for unique, poorly-estimable risks.

I know this is meant to be parody, but how closely does it resemble scenario analysis in the corporate world? From what I've read about the actual use of scenario analysis (e.g., at Shell), the process takes much longer (many sessions over a period of weeks).

Second, and more importantly: suits are typically not quants, and have a tendency to misinterpret (or ignore) explicit probabilities. And they can easily place far too much confidence in the output of a specific model (model risk). In this context, switching from full-on quant models to narrative model... (read more)

Authors: Ada C. Stefanescu Schmidt, Ami Bhatt, Cass R. Sunstein

Abstract:

During medical visits, the stakes are high for many patients, who are put in a position to make, or to begin to make, important health-related decisions. But in such visits, patients often make cognitive errors. Traditionally, those errors are thought to result from poor communication with physicians; complicated subject matter; and patient anxiety. To date, measures to improve patient understanding and recall have had only modest effects. This paper argues that an understanding of t

... (read more)

Good point. You don't have to go to the gym. I used to do jumping jacks in sets of 100, several sets throughout the day. Gradually increase the number of daily sets.

What would that look like?

Concretely? I'm not sure. One way is for a pathogen to jump from animals (or a lab) to humans, and then manage to infect and kill billions of people.

Humanity existed for the great majority of its history without antibiotics.

True. But it's much easier for a disease to spread long distances and among populations than in the past.

Note: I just realized there might be some terminological confusion, so I checked Bostrom's terminology. My "billions of deaths" scenario would not be "existential," in Bostrom's se... (read more)

1Lumifer
Why would it? A pandemic wouldn't destroy knowledge or technology. Consider Black Death -- it reduced the population of Europe by something like a third, I think. Was it a big deal? Sure it was. Did it send Europe back to the time when it was populated by some hunter-gatherer bands? Nope, not even close.

It's true that the probability of an existential-level AMR event is very low. But the probability of any existential-level threat event is very low; it's the extreme severity, not the high probability, that makes such risks worth considering.

What, in your view, gets the top spot?

0gilch
I'm not sure how to rank these if the ordering relation is "nearer / more probable than". Nuclear war seems like the most imminent threat, and UFAI the most inevitable.

We all know the arguments regarding UFAI. The only things that could stop the development of general AI at this point are themselves existential threats. Hence the inevitability. I think we already agree that FAI is a more difficult problem than superintelligence. But we might underestimate how much more difficult. The naive approach is to solve ethics in advance. Right. That's not going to happen in time. Our best known alternative is to somehow bootstrap machine learning into solving ethics for us without it killing us in the meantime. This still seems really damn difficult.

We've already had several close calls with nukes during the Cold War. The USA has been able to reduce her stockpile since the collapse of the Soviet Union, but nukes have since proliferated to other countries. (And Russia, of course, still has leftover Soviet nukes.) If the NPT system fails due to the influence of rogue states like Iran and North Korea, there could be a domino effect as the majority of nations that can afford it race to develop arms to counter their neighbors. This has arguably already happened in the case of Pakistan countering India, which didn't join the NPT. Now notice that Iran borders Pakistan. How long can we hold the line there?

I should also point out that there are risks even worse than existential ones, which Bostrom called "hellish", meaning that a human extinction event would be a better outcome than a hellish one. A perverse kind of near miss with AI is the most likely to produce such an outcome. The AI would have to be friendly enough not to kill us all for spare atoms, and yet not friendly enough to produce an outcome we would consider desirable.

There are many other known existential risks, and probably some that are unknown. I've pointed out that AMR seems like a low risk, but I also think
2Lumifer
What would that look like? Humanity existed for the great majority of its history without antibiotics.

Many people have been through similar periods and overcome them, so asking around will yield plenty of anecdotal advice. And I assume you've read the old /u/lukeprog piece How to Beat Procrastination.

For me, regular exercise has helped for general motivation, energy levels, willpower--the opposite of akrasia generally. (How to bootstrap the motivation to exercise? I made a promise to a friend and she agreed to hold me accountable to exercising. It was also easier because there was someone I wanted to impress.)

Good luck. When you've got a handle on it, do s... (read more)

2hamnox
Working out has been too troublesome for me, but I do like endorphin boosts. Who needs drugs when you can get your brain to drug you for you? Anytime you're feeling down, just do some kinda movement until your muscles burn a little then stop. It takes like 10 seconds of arm flapping or 3 crunches. You can do it multiple times a day and keep a running tally to build up the initial affordance.

Yes, I have. Nuclear war lost its top spot to antimicrobial resistance.

Given recent events on the Korean peninsula it may seem strange to downgrade the risk of nuclear war. Explanation:

  • While the probability of conflict is at a local high, the potential severity of the conflict is lower than I'd thought. This is because I've downgraded my estimate of how many nukes DPRK is likely to successfully deploy. (Any shooting war would still be a terrible event, especially for Seoul, which is only about 60 km from the border--firmly within conventional artillery r

... (read more)
0MrMind
Well, before it was: runaway bioweapon > UFAI > nuclear extinction, but the recent news about the international situation made me update. As I said elsewhere, I'm adopting the outside view on all these subjects, so I will gladly stand corrected.
2gilch
Why does antimicrobial resistance rank so high in your estimation? It seems like a catastrophic risk at worst, not an existential one. New antibiotics are developed rather infrequently because they're currently not that profitable. Incentives would change if the resistance problem got worse. I don't think we've anywhere near exhausted antibiotic candidates found in nature, and even if we had, there are alternatives like phage therapy and monoclonal antibodies that we could potentially use instead.

Nothing particularly exciting comes to my mind

Property prices would fall. Sounds like a job for real-estate entrepreneurs.

  • I think they can only mean either "variance" or "badness of worst case"

In the context of financial markets, risk = dispersion around the mean (often measured by the variance or its square root, the standard deviation). My finance professor emphasized that although in everyday speech "risk" refers only to bad things, in finance we talk of both downside and upside risk.

-1Lumifer
That is not true, or, rather, not entirely true. VaR is very widely used in the real world and it's not variance. I also think Taleb would facepalm at this definition X-)
3PhilGoetz
So "risk" really does mean surprise to them. Do you think this impairs their ability to reason about risk? E.g., would they try to minimize their risk because that's a good thing, for the ordinary definition of risk, but then actually minimize their variance? Do they talk to clients using the word "risk", and being aware on one level that they mean something different, yet not explain the difference?

Gettier walks into a bar and is immediately greeted with the assertion that all barroom furniture is soft, unless it's a table. So he produces a counterexample.

I think atypically, just like everyone else.

When I was in law school, I devised my own idiosyncratic solution to the problem of studying a topic I knew nothing about. I'd wander into the library stacks, head to the relevant section, and pluck a book at random. I'd flip to the footnotes, and write down the books that seemed to occur most often. Then I'd pull them off the shelves, read their footnotes, and look at those books. It usually took only 2 or 3 rounds of this exercise before I had a pretty fair idea of who were the leading authorities in the field. After reading 3 or 4 of those books, I usu

... (read more)
DanielLC180

Or you can just google it, and let PageRank do all that for you.

-2buybuydandavis
The ideological Turing Test probably suffers from differences in language usage and style. It's the difference between understanding the theory, and being able to impersonate a style convincingly. As for EY's article, I think he needs to update on the evidence for bedrock differences in people's values. Just because someone is a hero in their own story, doesn't mean they're not evil in mine. And certainly, vice versa. That's just silly. They do hate freedom - by what I mean by freedom, and by what EY means by freedom.