Summary:
Regardless of whether one adopts a pessimistic or optimistic view of artificial intelligence, policy will shape how it affects society. This column looks at both the policies that will influence the diffusion of AI and policies that will address its consequences. One of the most significant long-run policy issues relates to the potential for artificial intelligence to increase inequality.
The author is Selmer Bringsjord.
Academic: https://homepages.rpi.edu/~brings/
Wikipedia: https://en.wikipedia.org/wiki/Selmer_Bringsjord
Bringsjord is the author of a "proof" that P=NP. It is ... not an impressive piece of work, or at least I don't find it so. And it fails to be impressive in a way that seems highly relevant to philosophizing about AI. Namely, Bringsjord seems to think he's entitled to leap from "such-and-such a physical system seems like it correctly finds optimal solutions to small instances of the Steiner tree problem" to "such-and-such a physical system will somehow find the optimal solution in every instance of the Steiner tree proble...
Author:
Summary:
Philosophers have speculated that an AI given a goal such as creating paperclips might cause an apocalypse by learning to divert ever-increasing resources to the task, and then learning how to resist our attempts to turn it off. But this column argues that, to do this, the paperclip-making AI would need to create another AI that could acquire power both over humans and over itself, and so it would self-regulate to prevent this outcome. Humans who cr...
Thanks to Alex Tabarrok at Marginal Revolution: https://marginalrevolution.com/marginalrevolution/2018/05/one-parameter-equation-can-exactly-fit-scatter-plot.html
Title: "One parameter is always enough"
Author: Steven T. Piantadosi (University of Rochester)
Abstract:
We construct an elementary equation with a single real-valued parameter that is capable of fitting any “scatter plot” on any number of points to within a fixed precision. Specifically, given a fixed ε > 0, we may construct fθ so that for any collection of ordered pairs {(xj...
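The trick behind the abstract is that sin² composed with angle doubling acts as a binary shift map, so one real number can store many values in successive blocks of its binary expansion. A rough sketch of the idea (our own simplification, not the paper's exact encoder; `TAU`, the truncation scheme, and the sample values are assumptions, and ordinary floating point limits how many points this toy version can hold):

```python
import math

# Pack each data point into TAU bits of the binary expansion of a single
# number u, then recover them by shifting: sin^2(2^n * pi*u) reads off the
# bits starting at position n.
TAU = 8  # bits of precision per point (an assumption for this demo)

def encode(ys):
    """Encode values in [0, 1) into the single parameter theta."""
    u = 0.0
    for k, y in enumerate(ys):
        t = math.asin(math.sqrt(y)) / math.pi        # t in [0, 1/2)
        t_hat = math.floor(t * 2 ** TAU) / 2 ** TAU  # truncate to TAU bits
        u += t_hat / 2 ** (k * TAU)
    return math.sin(math.pi * u) ** 2

def f(theta, x):
    """The one-parameter family f_theta(x) = sin^2(2^(x*TAU) * arcsin(sqrt(theta)))."""
    return math.sin(2 ** (x * TAU) * math.asin(math.sqrt(theta))) ** 2

ys = [0.1, 0.5, 0.9, 0.25]
theta = encode(ys)
recovered = [f(theta, x) for x in range(len(ys))]  # each close to ys[x]
```

With arbitrary-precision arithmetic the same scheme fits any number of points; with 64-bit doubles you run out of mantissa after roughly 53 / TAU points, which is why the "one parameter" needs to be a real number of unbounded precision.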
HT to Tyler Cowen: https://marginalrevolution.com/marginalrevolution/2018/05/erik-brynjolfsson-interviews-daniel-kahneman.html
Ah, of course. Thanks.
The term "affordance width" makes sense, but perhaps there's no need to coin a new term when "tolerance" exists already.
I think I disagree; "tolerance", to me, seems to point more towards the special case where {B} is some external events and {X} and {Y} are internal reactions. To talk about social affordances, as the OP does, you'd have to talk about the tolerances of others for {B} done by different people [A-E] -- and you've made less obvious the fact that the tolerance of [Q] for {B} done by [A] is different than the tolerance of [Q] for {B} done by [E] -- the entire content of the post.
A ∨ B ⟷ ¬A ⟶ B
But this is not true, because ¬(¬A ⟶ B) ⟶ A ∨ B. With what you've written you can get from the left side to the right side, but you can't get from the right side to the left side.
What you need is: "Either Alice did it or Bob did it. If it wasn't Alice, then it was Bob; and if it wasn't Bob, then it was Alice."
Thus: A ∨ B ⟷ ((¬A ⟶ B) ∧ (¬B ⟶ A))
Interesting post, and I'm sure "not having thought of it" helps explain the recency of vehicular attacks (though see the comment from /r/CronoDAS questioning the premise that they are as recent as they may seem).
Another factor: Other attractive methods, previously easy, are now harder--lowering the opportunity cost of a vehicular attack. For example, increased surveillance has made carefully coordinated attacks harder. And perhaps stricter regulations have made it harder to obtain bomb-making materials or disease agents.
This also helps to ex...
How much support is there for promotion of prediction markets? I see three levels:
1. Legalization of real-money markets (they are legal in some places, but their illegality or legal dubiousness in the US--combined with the centrality of US companies in global finance--makes it hard to run a big one without US permission)
2. Subsidies for real-money markets in policy-relevant issues, as advocated by Robin Hanson
3. Use of prediction markets to determine policy (futarchy), as envisioned by Robin Hanson
1. We want public policy that's backed up by empirical evidence. We want a government that runs controlled trials to find out what policies work.
This seems either empty (because no policy has zero empirical backing), throttling (because you can't possibly have an adequate controlled trial on every proposal), or pointless (because most political disputes are not good-faith disagreements over empirical support).
Second, as this list seems specific to one country, I wonder how rationalists who don't follow its politics can inform this consensus....
Upvoted for the suggestion to reword the euthanasia point.
Useful distinction: "rationalist" vs. "rational person." By the former I mean someone who deliberately strives to be the latter. By the latter I mean someone who wins systematically in their life.
It's possible that rationalists tend to be geeks, especially if the most heavily promoted methods for deliberately improving rationality are mathy things like explicit Bayesian reasoning, or if most of the material advocating rationality is heavily dependent on tech metaphors.
Rational people need not fit the stereotypes you've listed....
Hey, I just saw this post. I like it. The coin example is a good way to lead in, and the non-quant teacher example is helpful too. But here's a quibble:
If we follow Bayes’ Theorem, then nothing is just true. Things are instead only probable because they are backed up by evidence.
The map is not the territory; things are still true or false. Bayes' theorem doesn't say anything about the nature of truth itself; whatever your theory of truth, that should not be affected by the acknowledgement of Bayes' theorem. Rather, it's our beliefs (or at least the beliefs of an ideal Bayesian agent) that are on a spectrum of confidence.
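To make the belief-vs-truth distinction concrete, here is a minimal Bayes update for the coin example mentioned above (the 50/50 prior and the double-headed alternative are illustrative assumptions, not details from the post):

```python
# Bayes update for a coin that is either fair or double-headed.
# Prior and likelihoods below are illustrative assumptions.
p_trick = 0.5              # prior P(coin is double-headed)
p_heads_given_trick = 1.0  # likelihood of heads if double-headed
p_heads_given_fair = 0.5   # likelihood of heads if fair

# Observe a single heads and update.
p_heads = p_trick * p_heads_given_trick + (1 - p_trick) * p_heads_given_fair
posterior = p_trick * p_heads_given_trick / p_heads

# The coin either is or isn't double-headed -- a fact about the territory.
# What moved is only our confidence: from 0.5 to 2/3.
```

The update changes the agent's credence, not the coin; that is the sense in which probability lives in the map.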
Introduction:
Artificial intelligence (AI) is useful for optimally controlling an existing system, one with clearly understood risks. It excels at pattern matching and control mechanisms. Given enough observations and a strong signal, it can identify deep dynamic structures much more robustly than any human can and is far superior in areas that require the statistical evaluation of large quantities of data. It can do so without human intervention.
...We can leave an AI machine in the day-to-day charge of such a system, automatically self-correcting and lea
I agree with most of what you've said, but here's a quibble:
If you are an evil pharma-corp, vaccines are a terrible way to be evil.
Unless you're one of the sellers of vaccines, right?
That's too bad; it probably doesn't have to be that way. If you can articulate what infrastructural features of 1.0 are missing from 2.0, perhaps the folks at 2.0 can accommodate them in some way.
Done.
More (German): http://karl-schumacher-privat.de
Original Eliezer post: http://lesswrong.com/lw/jq/926_is_petrov_day/
Other LW discussions:
The anniversary of the relevant event will be next Tuesday.
I don't remember if the Sequences cover it. But if you haven't already, you might check out SEP's section on Replies to the Chinese Room Argument.
Scholarly article
Title: Do scholars follow Betteridge’s Law?
The answer is no.
Nice.
This was also explored by Benedict Evans in this blog post and this EconTalk interview, mentioned in the most recent feed thread.
In addition to what you've cited, here are some methods I've used and liked:
Email professors to ask for recommendations. Be polite, concise, and specific (e.g., why exactly do you want to learn more about x?).
David Frum says he used to pick a random book on his chosen topic, check which books kept showing up in the footnotes, then repeat with those books. A couple rounds yielded a good picture of who the recognized authorities were. (I pointed this out in a Rationality Quotes thread in 2015. Link: http://lesswrong.com/lw/lzn/rationality_quotes_thread_a
but I don't feel them becoming habitual as I would like
Have you noticed any improvement? For example, an increase in the amount of time you feel able to be friendly? If so, then be not discouraged! If not, try changing the reward structure.
For example, you can explicitly reward yourself for exceeding thresholds (an hour of non-stop small talk --> extra dark chocolate) or meeting challenges (a friendly conversation with that guy --> watch a light documentary). Start small and easy. Or: Some forms of friendly interaction might be more rewarding than...
Is one's answer to the dilemma supposed to illuminate something about the title question? Presumably a large part of the worth-livingness of life consists in the NPV of future experiences, not just in past experiences.
(2) I am alive.
(3) Therefore, life is worth living.
Quibble: A...
Maybe I should write a book!
I hope you do, so I can capitalize on my knowledge of your longstanding plan to capitalize on your knowledge of Adams' longstanding plan to capitalize on his knowledge that Trump would win with a book with a book with a book.
And even if you have one, the further that real-life market is away from the abstract free market, the less prices converge to cost + usual profit.
True.
I suspect that there is no market for unique, poorly-estimable risks.
That's probably true for most such risks, but it's worth noting that there are markets for some forms of oddball events. One example is prize indemnity insurance (contest insurance).
The formatting is broken
Fixed, thanks.
For unique, poorly-estimable risks the insurance industry had strong incentive to overprice them
Plausible, and one should certainly beware of biases like this. On the other hand, given conventional assumptions regarding the competitiveness of markets, shouldn't prices converge toward a rate that is "fair" in the sense that it reflects available knowledge?
I know this is meant to be parody, but how closely does it resemble scenario analysis in the corporate world? From what I've read about the actual use of scenario analysis (e.g., at Shell), the process takes much longer (many sessions over a period of weeks).
Second, and more importantly: suits are typically not quants, and have a tendency to misinterpret (or ignore) explicit probabilities. And they can easily place far too much confidence in the output of a specific model (model risk). In this context, switching from full-on quant models to narrative model...
I found this article through Marginal Revolution: http://marginalrevolution.com/marginalrevolution/2017/04/thursday-assorted-links-106.html
Authors: Ada C. Stefanescu Schmidt, Ami Bhatt, Cass R. Sunstein
Abstract:
...During medical visits, the stakes are high for many patients, who are put in a position to make, or to begin to make, important health-related decisions. But in such visits, patients often make cognitive errors. Traditionally, those errors are thought to result from poor communication with physicians; complicated subject matter; and patient anxiety. To date, measures to improve patient understanding and recall have had only modest effects. This paper argues that an understanding of t
For comparison, here are Robin Hanson's thoughts on some Mormon transhumanists: http://www.overcomingbias.com/2017/04/mormon-transhumanists.html
Good point. You don't have to go to the gym. I used to do jumping jacks in sets of 100, several sets throughout the day. Gradually increase the number of daily sets.
What would that look like?
Concretely? I'm not sure. One way is for a pathogen to jump from animals (or a lab) to humans, and then manage to infect and kill billions of people.
Humanity existed for the great majority of its history without antibiotics.
True. But it's much easier for a disease to spread long distances and among populations than in the past.
Note: I just realized there might be some terminological confusion, so I checked Bostrom's terminology. My "billions of deaths" scenario would not be "existential," in Bostrom's se...
It's true that the probability of an existential-level AMR event is very low. But the probability of any existential-level threat event is very low; it's the extreme severity, not the high probability, that makes such risks worth considering.
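In expected-value terms (all numbers below are hypothetical, chosen only to illustrate the severity point):

```python
# Hypothetical comparison: a very unlikely existential-scale event can carry
# a larger expected loss than a far likelier regional one.
p_existential, deaths_existential = 1e-6, 8e9  # roughly everyone
p_regional, deaths_regional = 1e-2, 1e5

ev_existential = p_existential * deaths_existential  # ~8,000 expected deaths
ev_regional = p_regional * deaths_regional           # ~1,000 expected deaths
```

And plain expected deaths still understate the case, since an existential event also forecloses all future generations.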
What, in your view, gets the top spot?
Many people have been through similar periods and overcome them, so asking around will yield plenty of anecdotal advice. And I assume you've read the old /u/lukeprog piece How to Beat Procrastination.
For me, regular exercise has helped with general motivation, energy levels, and willpower--the opposite of akrasia generally. (How to bootstrap the motivation to exercise? I made a promise to a friend and she agreed to hold me accountable to exercising. It was also easier because there was someone I wanted to impress.)
Good luck. When you've got a handle on it, do s...
Yes, I have. Nuclear war lost its top spot to antimicrobial resistance.
Given recent events on the Korean peninsula it may seem strange to downgrade the risk of nuclear war. Explanation:
While the probability of conflict is at a local high, the potential severity of the conflict is lower than I'd thought. This is because I've downgraded my estimate of how many nukes DPRK is likely to successfully deploy. (Any shooting war would still be a terrible event, especially for Seoul, which is only about 60 km from the border--firmly within conventional artillery r
Nothing particularly exciting comes to my mind
Property prices would fall. Sounds like a job for real-estate entrepreneurs.
In the context of financial markets, risk = dispersion around the mean (often measured using the standard deviation of returns). My finance professor emphasized that although in everyday speech "risk" refers only to bad things, in finance we talk of both downside and upside risk.
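A toy illustration of the two-sided usage (the return series is made up for illustration):

```python
import statistics

# Hypothetical monthly returns for some asset.
returns = [0.04, -0.02, 0.07, 0.01, -0.05, 0.03]

mean = statistics.mean(returns)
# "Risk" in the finance sense: dispersion in both directions around the mean.
total_risk = statistics.stdev(returns)

# Downside deviation counts only the below-mean outcomes -- closer to the
# everyday sense of "risk".
downside_dev = (sum((r - mean) ** 2 for r in returns if r < mean)
                / len(returns)) ** 0.5
```

Since the standard deviation also counts the above-mean months, total risk here exceeds the downside-only measure; a big upside surprise raises "risk" in the finance sense too.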
Gettier walks into a bar and is immediately greeted with the assertion that all barroom furniture is soft, unless it's a table. So he produces a counterexample.
I think atypically, just like everyone else.
...When I was in law school, I devised my own idiosyncratic solution to the problem of studying a topic I knew nothing about. I'd wander into the library stacks, head to the relevant section, and pluck a book at random. I'd flip to the footnotes, and write down the books that seemed to occur most often. Then I'd pull them off the shelves, read their footnotes, and look at those books. It usually took only 2 or 3 rounds of this exercise before I had a pretty fair idea of who were the leading authorities in the field. After reading 3 or 4 of those books, I usu
Or you can just google it, and let PageRank do all that for you.
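Frum's procedure is essentially a hand-run citation count; a toy sketch with a made-up citation graph (the titles, round count, and cutoff are all assumptions):

```python
from collections import Counter

# Toy citation data: each book maps to the books its footnotes cite.
# All titles are hypothetical placeholders.
footnotes = {
    "start": ["A", "B", "A"],
    "A": ["B", "C"],
    "B": ["C", "A", "C"],
    "C": ["A", "B"],
}

def frum_search(start, rounds=2, top=2):
    """Repeatedly follow the most-cited books' footnotes, tallying citations."""
    frontier = [start]
    tally = Counter()
    for _ in range(rounds):
        cited = Counter()
        for book in frontier:
            cited.update(footnotes.get(book, []))
        tally.update(cited)
        # Next round: pull the currently most-cited books off the shelf.
        frontier = [book for book, _ in cited.most_common(top)]
    return [book for book, _ in tally.most_common(top)]
```

This is crude one-hop citation counting rather than true PageRank, but after a couple of rounds the frequently cited "authorities" dominate the tally, which matches Frum's experience in the stacks.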
This is related to the ideological Turing Test, as well as the LW post Are Your Enemies Innately Evil?
Two points:
First, can you clarify what you mean by rational persuasion, if you are distinguishing it from logical proof? Do you mean that we can skip arguing for some premises because we can rely on our intuition to identify them as already shared? Or do you mean that we need not aim for deductive certainty--a lower confidence level is acceptable? Or something else?
Second, I appreciate this post because what Harris's disagreements with others so often need is exactly dissolution. And you've accurately described Harris's project: He is trying... (read more)