scafaria

Vince is CEO of DotAlign, an NYC-based software company. DotAlign provides privacy-first software for enterprise relationship intelligence (“who knows who well”) and CRM enhancement. Vince graduated summa cum laude with a BS in economics from the Wharton School at the University of Pennsylvania and is a Chartered Financial Analyst. He dropped out of the Wharton MBA program to found the fintech startup DealMaven, which was later sold to FactSet (NYSE:FDS). Vince has taught valuation and financial modeling to thousands of financiers. Previously, he was an investment banker and private equity deal professional with Donaldson, Lufkin & Jenrette. Vince holds several patents related to privacy-first data sharing and analytics and has 20 years of experience as a hands-on software engineer. Vince fervently believes that technologies fostering coordination, communication, and trust hold the key to his someday-grandchildren inheriting the planet they deserve.

Comments

A huge concern I have is "longtermism".

https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo

Someone could read what I've written and say "yes, fulfill humanity's potential". I don't think those are the same thing. Longtermism looks thousands or millions of years ahead and cares much less about the present day unless it results in total extinction or the equivalent. As we'd say in finance, the "discount rate" matters (how we weigh the present vs. the future). I believe the metric should be measured (a) accounting for *changes* to state (so democracies falling to tyranny matters, curing tropical diseases matters, etc.) and (b) placing nearly all value on those currently living and their children and grandchildren. On (b), we all want to leave a better world for our children and grandchildren, and we want them to leave a better world for their children, and so on. But we'd place a higher value on our own (already alive) children than on distant potential descendants 20 generations down the line. I think that same instinct needs to be preserved when weighing current vs. future people.
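
To make the discount-rate instinct concrete, here is a minimal sketch (my own illustration with made-up per-generation rates, not a figure from the longtermism literature) of how even a modest discount rate separates the weight placed on our own grandchildren from the weight placed on descendants 20 generations out, while a zero rate treats them identically:

```python
# Illustration only: how a per-generation discount rate weights future generations.
# The rates and generation counts below are made up for the example.

def generation_weight(rate_per_generation: float, generations_out: int) -> float:
    """Relative weight of a generation n steps in the future vs. today."""
    return 1.0 / (1.0 + rate_per_generation) ** generations_out

for rate in (0.00, 0.02, 0.10):
    grandchildren = generation_weight(rate, 2)
    distant = generation_weight(rate, 20)
    print(f"rate={rate:.2f}: grandchildren={grandchildren:.3f}, 20 generations out={distant:.3f}")
```

At a zero rate the distant future counts exactly as much as our grandchildren (the strong-longtermist limit); any positive rate preserves the instinct described above.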

One point on which longtermists and I would agree is that collective action problems (described in the last paragraph) need to be solved if we are to create a better world (or even preserve this one) for ourselves and future generations.

Thanks. Planetary-scale collective action is the Big Goal. Right now, the dominant social platforms that form the public square are so far from that vision that I was trying to start with the question of "collective action in what direction?" For that, I wanted to make a moral argument without invoking religion or left/right bias. Anyhow, that was the goal. Thanks again. Minus 3 so far, so I guess I need to learn this forum better. Just trying to stand up to Moloch. Peace.

Hi Giskard, 

Yes to your "more utility" point. I am influenced by Robert Wright, who makes a compelling and direct case that communication and trust are what make positive-sum outcomes possible (in Nonzero and here). And he points out that societies or organizations that generate those positive-sum effects will outcompete those that devolve into a race to the bottom.

Re your comment "Maybe the cooperating group is acting under a norm that is more complex than just 'always cooperate', that allows a state of Cooperate/Cooperate to become stable?": yes, that's exactly it! Civilization is a multipolar game, as Scott Alexander points out in Meditations on Moloch and also in the article you cite ('...and the general case is called "civilization"').

In Moloch, Alexander points out all sorts of multipolar traps. Yet on the whole society has moved forward (at least since the 1600s) by developing norms of sufficient complexity to govern our interactions. Fortunately, we don't live in a simple prisoner's dilemma (PD) played only once or played anonymously (both of which strongly disfavor cooperation). Our personal relationships, reputations, sense of shame, and fear of downstream consequences make real life different from the simplest PD game. They provide enough nuance and complexity that on the whole we benefit from "inheriting a cultural norm and not screwing it up" (the article you cite).
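
To make that concrete, here is a toy sketch (my own illustration, not anything from Alexander's essay or the article you cite) of why repetition plus memory, a crude stand-in for reputation, changes the payoff landscape: a simple reciprocating strategy ends up far better off against its own kind than unconditional defectors do against theirs.

```python
# Toy iterated prisoner's dilemma with standard payoffs (T=5, R=3, P=1, S=0).
# Everything here is illustrative; "memory of the other player's moves" stands in
# very roughly for reputation in real life.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the other player's last move.
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print("TFT vs TFT:        ", play(tit_for_tat, tit_for_tat))
print("TFT vs AlwaysD:    ", play(tit_for_tat, always_defect))
print("AlwaysD vs AlwaysD:", play(always_defect, always_defect))
```

Reciprocators paired with each other earn roughly three times what defectors earn among themselves, which is the sense in which reputation-like memory stacks the deck toward cooperation.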

Here's my premise: up until now, in our digital lives we have lacked agency. Our online communications tend to be either centralized (governed by a Zuckerberg) or anonymous (where reputation, relationships, sense of shame, and fear of downstream consequences don't apply). With the former we lack agency because the medium is not designed to support our individual interests or even human flourishing (as today's breaking news about Facebook reminds us). With the latter, the medium often lacks the requisite complexity that forms our cultural norm inheritance in the offline world.

Online life today is merely an inadequate equilibrium, to use Eliezer Yudkowsky's term. The purpose of my essay and the re-post here is to ask, "Would the following set of changes (which I attempt to articulate) allow for digital interactions that break free of the PD dynamic and let us solve collective action problems?" How could we design digital interactions so that they represent a multiplayer game whose structure stacks the deck in favor of positive-sum outcomes? My optimistic conclusion is that the online world (leaning on decentralized identifiers, zero-knowledge proofs, etc.) can offer game designs more favorable to positive-sum outcomes than anything we've yet seen offline (and certainly better than today's online designs).

Ironically, tipping out of today's inadequate equilibrium is itself a collective action problem. And as I say, "Until individuals regain agency in their digital social interactions, coordinating for positive-sum collective action is hard." Fortunately, I believe there is now (finally) a fulcrum for a tipping point that does not rely on collective action. Now that advertisers can no longer exploit personal identifiers in the same way, I believe they will be forced to explore models that make media firms more money anyway (my point re the "Barbados" example).

Scott Alexander, Robert Wright, many of you on this forum, and I have long thought about how to achieve more positive-sum outcomes (how to defeat Moloch). Usually we look with hope to morality and rationality, yet we know how powerful a force Moloch really is. That's why I'm excited to believe there can (finally!) be a tipping point via: (i) capitalist incentives, plus (ii) the shock to today's equilibrium from Apple's and Google's privacy announcements, plus (iii) (not required, but a bonus!) regulatory and other pressures owing to the revelations about Facebook.

Thanks so much for engaging with the essay. I'm optimistic that there really is a path toward a better equilibrium, and it helps to bounce the ideas off smart people. Glad to have this forum!

Great, thanks again.

Thanks, Maxwell. That could be. I'm working toward a book, so I built a website around that very long essay. My goal in posting here on LessWrong was to see if there really is an opportunity for "World Optimization" / a better equilibrium for the human condition growing out of those concepts. If the mods of the site think it worthwhile to repost in full (and to sanitize anything promotional), I can. If not, that's fine too, and I'm grateful for the opportunity. I will continue refining ideas toward improving our economy and society. Thanks!

Looks like I was supposed to post the full text here instead of a link. Is that right? (I'm new to LessWrong.) Thanks.