-It is better to offer no excuse than a bad one.
George Washington, letter to his niece Harriet Washington, October 30, 1791 First president of US (1732 - 1799)
This contradicts "leave a retreat" - offering someone a bad excuse to get out of a situation. "You're late. Was it traffic again?" might work better in the current situation than demanding to know why they are late.
But in politics it might make sense.
Games Thread
I've been playing the third release in the Zero Escape series, Zero Time Dilemma (3DS release June 28, PC release June 29). It's a mix of visual novel (VN) and "escape the room" in terms of genres.
In Zero Time Dilemma, 9 people are stuck in a simulated spaceship, to test the effects of a crewed flight to Mars. Because the projected flight path puts the sun between Earth and Mars, there will have to be radio silence for a few days. Several hours before the radio silence is lifted, the group is forced to participate in a game by an entity called 'Zero'. Each player has to wear (or rather, wakes up wearing) a bracelet, which functions as a watch and as an injector of sleeping & amnesia drugs.
Zero has a fascination with the branching of time - near the start of the game he tells a story about a runner. She runs through the park every morning. At a particular fork, she almost always goes right. However, one day she goes left. On the left path, she comes across an old man whom she often sees when running in the mornings. The old man asks her, "Hey, this is different from your usual route. Why did you go left?" The woman answers, "Because there was a snail." Later that day, the police find the woman dead in the bushes of the left path. "Isn't it curious how one snail can affect a life so much?"
Zero forces the players to participate in "Decision Games". The players are told that...
In other words... 6 people must die.
The players are split up into 3 teams of 3 members each (for maximum tribalism?) and then are told of their first decision game - a screen turns on, showing the names of the other teams. Each team is to vote which other team should be executed. If a team has 2 or more votes, it is executed. If a team does not vote, it gains 2 votes against it.
So the game starts off with a version of the prisoner's dilemma. Through the plot, an option (A votes for B, B votes for C, C votes for A) is suggested and passed along a side channel, turning it into a genuine coordination problem rather than a random guess.
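For fun, the first round's rule set can be sketched in a few lines (the helper function and team labels are my own, not from the game):

```python
from collections import Counter

def executed_teams(votes, all_teams):
    """votes maps a team to the team it voted against.
    A team that doesn't vote gains 2 votes against itself;
    any team with 2 or more votes is executed."""
    tally = Counter(votes.values())
    for team in all_teams:
        if team not in votes:  # abstention penalty
            tally[team] += 2
    return sorted(t for t in all_teams if tally[t] >= 2)

teams = ["A", "B", "C"]
# The side-channel cycle: every team receives exactly one vote,
# so nobody reaches the 2-vote threshold and nobody is executed.
print(executed_teams({"A": "B", "B": "C", "C": "A"}, teams))  # → []
# If two teams gang up on a third, that team dies.
print(executed_teams({"A": "B", "C": "B"}, teams))  # → ['B']
```

The cycle is the only symmetric outcome in which no team crosses the threshold, which is why it gets suggested in the story.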
Pretty much every "Decision Game" is a cruel game of ethics - where failure is met with death, and success with continued survival.
The game is cut up into 90-minute segments - after these 90 minutes are up, players are put to sleep and have their memory wiped. It's sad that none of the players seems to try to game this gimmick (not that I've seen yet, but I'm only at about 20% completion) by writing stuff on their hands or something. But what makes this interesting is that the game plays out in time fragments, some on one timeline, some on another.
I've pretty much been on an adrenaline high whilst playing this game - whilst you should play the earlier releases in the series if you want to connect all the plot threads (and there are a LOT), you could get away without playing them. There's enough exposition to explain plot points covered in the earlier releases, although it sometimes feels like a noodle incident ("I've seen this before, it's just like that one time with the rabbits").
If you're still unsure, I'd recommend downloading a DS emulator and a ROM of 9 Hours, 9 Persons, 9 Doors - it's the first release in the series and has similar gameplay, although without the traveling between story fragments.
To me, it's an interesting game where my ethics are put to the test - it's easy to say "shut up and multiply"; it's another thing to be faced with the choice for real.
Paywall for the "conclusion" part.
This article would have you believe that the best path forward is not an FAI Omega which will solve all our problems, and that you shouldn't even try to build something like that - because, you know, think about all the jobs, and who are those tech industry guys anyway, they shouldn't be allowed to decide all this.
I understand that they'd think FAI - friendly artificial general intelligence - is maybe not where you'd want to go. AI is scary. It can do really scary things. If we could have a slower transition, we could steer more.
But I feel that their arguments are all in the wrong category. You don't refuse to solve all the problems just because it would mean people are out of a job. As if a job is the only thing of importance in your life. Eat, sleep, work, repeat.
The article also takes a dangerous stance with regard to UFAI - "stop worrying about what AI will look like and just start". There is value in doing things.
Maybe... maybe they mean something else by AI? Maybe they're pointing at "smart algorithms" like navigation software and product recommendations. I mean, I have no idea where AI comes in with translating visual information to auditory information - but it's heralded as an AI "thing".
But, there's a disconnect here. If they mean smart algorithms and we mean AGI, then this article makes a lot more sense. Why would you go talk about ethics for making smart algorithms? Don't you see? This man can "see" because of smart algorithms! Smart algorithms are a major boon to people and the economy! Smart algorithms can help people!
And then people who mean AI as AGI say "AI could solve all our problems, if we can get it right" which is heard as "smart algorithms could solve all our problems, if we can get it right" - which sounds really optimistic. And then AI as AGI talks about something like the danger of a paperclip optimizer, and this makes no sense from the context of "smart algorithms".
"Smart algorithms" don't hack into servers to gain funds to produce tons of paperclips. At worst, one may order several tons instead of several kilos of something because of a calculation mistake, but we could solve this by making better, transparent, accountable smart algorithms! Anyone who sees AI as AGI will shake their head at that; if a paperclip maximizer predicts that letting the humans see that it ordered 10 million paperclips will cause that order to be canceled (and thus 10 million paperclips not to be created), it will HIDE that fact from people.
So what this article talks about is NOT AGI. It talks about smart algorithms that tech companies would build and improve, slowly improving every aspect of our lives. It then wants to steer the creation of smart algorithms in such a way that humans can contribute, rather than being left out of the picture.
"Is it not better to keep 15 people employed with assistive AI than to displace those 15 workers with machines that simply do the job with minimal oversight?"
No. It is not. I'd rather see those 15 people doing something productive, and if there truly isn't something productive to do (or maybe they can't do anything productive) I'd like to see them having a good life.
Regarding the ideas: that depends entirely on how the AI works. I'm not sure what you'd do if you knew the AI was INTP. Heck, wasn't Myers-Briggs flawed in the first place? Also, how is that related to ethical decisions? Can you only be ethical if you are introverted (or extroverted)?
AI as AGI thinks differently than a human would. Modeling it using human tests is bound to be interesting (in a "huh, I wonder what would happen" way, not in an "expected potential" way), but I wonder whether it'll be useful. If you want to treat AGI as a human with a human personality, then you have most likely anthropomorphized the AI, and that's something you shouldn't do; the AI will most likely think differently.
Also...
"Can we control an AI by creating a system of motivations that causes it to generally work in an ethical way?"
Yes! We call it a "value system". If you'll read the article you linked, you'll see that it contains a big quote: "The tech industry should not dictate the values and virtues of this future."
"how do we create boundaries that stop an AI from causing harm of a violent or traumatizing nature?"
Replace "AI" with "humans" and you've got "laws". The current legal system is working... kind of? But it needs a lot of work before it can run without human intervention entirely.
So yeah, some of your ideas are "yes, and it's a field of study" and some are "no because AI is not humans".
If you can rely on people's honesty, you could add a checkbox question "I wanted to see the results" and get a 0 or 1 out of that one, allowing you to calculate what the real average should have been.
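A minimal sketch of that correction, with made-up numbers: drop the responses from people who only answered to see the results, then recompute the mean.

```python
# Each response: (answer_value, checked "I wanted to see the results")
responses = [(5, False), (3, False), (0, True), (0, True), (4, False)]

raw_mean = sum(v for v, _ in responses) / len(responses)
genuine = [v for v, curious in responses if not curious]
real_mean = sum(genuine) / len(genuine)

print(raw_mean)   # 2.4 - dragged down by curiosity clicks
print(real_mean)  # 4.0 - the average the question was actually measuring
```

This only works if the checkbox is answered honestly, which is exactly the caveat above.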
(I've mostly only skimmed.)
It can be hard to find good content in the diaspora. Possible solution: Weekly "diaspora roundup" posts to Less Wrong. I'm too busy to do this, but anyone else is more than welcome to (assuming both people reading LW and people in the diaspora want it).
This is what /r/RationalistDiaspora was intended to do. It never really got traction, and is basically dead now, but it still strikes me as a good solution. If that's not going to revive though, I agree that a weekly thread on LW is worth trying. By default, I'll make one later this week. (I'm not currently sure I'll have anything to post in it myself, I'll be asking people to post links in the comments.)
Go tell Scott Alexander you'll build an online forum to his specification, with SSC community feedback, to provide a better solution for his overflowing open threads.
He tried to move people to /r/SlateStarCodex, but that didn't work. We'd want to understand why. (Some hypotheses: it wasn't actually on SSC, where people go directly; posts there don't pop up in their RSS readers; people have an aversion to comment systems with voting; people have an aversion to reddit specifically.)
As Scott features more and more posts, he gains a moderation team full of people who wrote posts that were good enough to feature.
I'm not sure that "writes good posts" and "would make a good moderator" are sufficiently correlated for this to work. A lot of people like Eliezer's writing but dislike his approach to moderation.
(On the other hand: maybe, if we want Eliezers to stick around, we need them to be able to shape the community? Even if that means upsetting people who don't write much.)
It also creates weird incentives, like: "I liked this post that was highly critical of our community, but I don't want the author to be a mod". (This is the problem that Scott Aa points to of "this system can only improve on ordinary democracy if the trust network has some other purpose" - I worry that voting-for-comment-scores isn't a sufficiently strong purpose to outweigh voting-for-moderators.)
Another system to consider would be to do it based on the way people administer votes, not the way they remove them. If your votes tend to correlate with others', they have more weight in future. If posts you flag tend to get removed, your flags count for more. (I'm not convinced that this works either.)
"If posts you flag tend to get removed, your flags count for more."
StackExchange uses a flag weight model. They removed it from the visible section of the profile (http://meta.stackexchange.com/questions/119715/what-happened-to-flag-weight) but I think they still use it internally.
Hmm, I didn't intend for the prophet to contradict himself. (Based on your comments and others, I seem to have tripped and fallen hard into the illusion of transparency.) Would you mind elaborating on the contradictory statement he makes? And, had he not said anything apparently contradictory, then would you have paid $100?
We've talked about this before, right? He claims there is something in your future that you cannot prevent, like ever ever. Like "even if we chain you to the wall in a dungeon all locked up with locks"-ever. I don't know what happens if you commit suicide in the example; I guess the point is moot in that case.
But! He has THE CURE! It only has a 50% chance of working, and if you pay $100, he will give you the cure.
I detect a slight personal bias here; I treat Omega as "an entity" and this prophet as "a person, who may or may not be out to scam me"... but whatever, I'm supposed to override that.
We'll assume THE CURE is informational in nature, because if it were physical, there exist futures in which I slam the prophet to the ground and just take THE CURE without paying $100. Or I convince the prophet that, look, I don't have $100 on me right now, how about I give you just $50 for it? To which the prophet responds "it's okay, you can pay in installments". (This is why "Omega" solves a lot of problems: when you agree to pay $100, he will wire it for you. This prophet can't plausibly have hacked the world's banking systems; Omega can. The prophet cannot withhold your income; Omega can.)
At that point... well, yes, I'd pay by axiom. The prophet is trustworthy. Like, 100% trustworthy. That's the whole premise. Omega isn't, but if you want proof, he'll generate it for you.
I could be wrong about this (one part of me says "Yes, you're wrong! Wrong wrong wrong wrong!" and the other says "Nah, this is fine"), but I think you'd be more likely to run into Omega-like entities than truly honest prophets, because we've had SO MANY people already claiming to be able to see the future (and failing; how many times has the world been supposed to end already?) that when a man comes up to me and says such a thing, the first thing I think is "scammer".
More points (not all of them fair, and most of them arguing against the example, which is normally not the goal): Omega can generate proof so fast that if I wanted to know whether it was likely that he did what he did, I could learn of such proof in a reasonable amount of time. Maybe he's been playing this game with other people. Maybe he made a public pledge on a blockchain somewhere. He could help me out; I'd be willing to engage him in detail if he could arrange for faster transport home than the bus. The prophet, on the other hand, is someone I'd encounter on the street. When I'm outside, I am going somewhere. Rarely am I waiting. When I'm outside, I don't have $100 on me (nor the equivalent in euros). So when a man walks up to me and starts a story like this, my first reaction is "I don't have time for this". "But wait! There is a great doom lurking in your future!" Yeah, right... I'm supposed to trust this person by axiom, but this is so opposite to my regular reaction that I'm bound by the scenario anyway. "If I wasn't me, I wouldn't act like me." Right. I knew that already.
Well, that's a whole lot of words; basically what you're encountering here is a mix of "an all-powerful AI can fix problems if they occur, so anything you could think of a solution to yourself isn't a real problem" and "my prior for people walking up to me telling me there is some reason I should give them money that doesn't involve unpaid bills is to ignore them".
We can reframe the situation, though. Maybe I just missed my bus and have to wait 15 minutes before the next one shows up, and then the prophet comes up to me. Now I have time. Maybe the prophet is my friend, whom I've always known to be pretty rational. Maybe the prophet is my dad or my mom.
Here's a version you might enjoy more:
You take your car to the mechanic for the yearly checkup. He calls you up and says, "Mate, there's a slight problem with your car. There's some corrosion on the fuel valve according to this sensor - it's not a big problem, and your car passes the yearly inspection, so it's cleared for the road, but I worry about this - if that corrosion continues, then bits of the valve could end up in the fuel mixture, thoroughly wrecking your engine whilst you're driving, and you'd have an accident for sure. We can fix it, no problem, but we'd have to add $100 to the bill for parts and hours worked."
Would you pay?
Maybe you don't have a car. Maybe you think selling the car off and buying a new one is better. Maybe you'd get a second opinion first. But there are plenty of people I can think of who would say "sure, do it" - I mean, if the dealership you've been going to for a few years tells you something like this, then, well, I'd feel unsafe driving it on the highway.
Other versions of this same problem: a minor crack in your wall. Mom tells you, "You should really get that looked at, you know. You know that old couple two streets over? They have a minor crack too, and later during the summer it tore the entire wall in two due to the ground drying out" (or something). Fees for a building inspector are $100... do you ignore, or pay?
These scenarios differ in that you can gather intel about the likelihood of the future bad event in greater detail (albeit you might have to spend something for that as well - letting your uncle who works in construction come by for coffee and a short look at that crack costs you at least a coffee and some time, and googling for "corroded fuel valve scam" also takes you a bunch of time).
And yet I'd totally do that. I'd call up my uncle and have him take a look. I'd let them fix my car. But that prophet of yours is not giving me any details. He's engaging in fear mongering. There's a lot of fear mongering in the world already and not all of it is true. So my prior for paying people based on fear mongering is lower than my prior for paying people who tell me my car might break down...
There's a problem with these scenarios, though; if you take a careful look and play with the numbers, you'll see that having to pay $10000 is not 100% certain - maybe the car will be fine, maybe the wall will be fine. And if you pay, it's either fixed or you'll know whether it is a problem.
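The trade-off in these scenarios can be put in expected-value terms. The dollar figures are the ones from the examples above; the failure probability is a made-up assumption for illustration:

```python
repair_cost = 100      # fix the valve (or inspect the crack) now
damage_cost = 10_000   # wrecked engine / torn wall later
p_failure = 0.2        # ASSUMED chance the bad event actually happens

ev_pay = repair_cost               # certain cost, risk removed
ev_skip = p_failure * damage_cost  # gamble on everything being fine

print(ev_pay, ev_skip)  # 100 vs 2000.0
# Break-even: paying wins whenever p_failure > 100 / 10000 = 1%.
```

The point of the paragraph above is precisely that p_failure is neither 0 nor 1 here, unlike in the prophet's scenario, and that paying also buys you information.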
So enter the prophet.
You're outside of a restaurant, busy with dessert, when you get the call from the mechanic. He explains about the possible corroded fuel valve. You tell him you want to enjoy your dessert first; you'll call him up in an hour or so with your answer.
You've finished your meal - when the prophet walks up to you and says you'll DEFINITELY crash if you don't get your car fixed, but if you fix your car then there'll only be a 50% chance.
...
Yeah, sorry, but this case is scary for me too. Say what, prophet? Thanks for telling me I'm doomed to crash if I don't get it fixed, that's valuable information. But what do you mean by "50% chance"? Is there something ELSE wrong with my car? And the prophet loses credibility again. I wish I could get some answers out of this prophet so that I could trust him some more. (Bias here: I'm allowing Omega to answer questions and I'm not giving the prophet the same opportunity. This is of course a major difference, but it stems from my personal feeling that crystal-ball prophecies tend to be "I've said there is such a chance, so there is, no further questions allowed", whereas Omega would answer things like "over how many years is that crash chance calculated?" - to which the answer would be very interesting to hear.)
(Too many words)
And if the prophet is "honest and truly prophetic"?
He makes a self-contradictory statement and loses credibility points. Like, a lot of them. Maybe not in general, but a lot of them for this specific topic.
Because after going through what is most likely a pretty darn well-run workshop, with lots of effort put in by actual real people whom I will see during those days, talk to, and learn some stuff from - to then say afterwards "sorry, but I don't think that what I learned here is THAT valuable" to their face (and I have seen their faces, so putting it in an email is a lot like saying it to their face) - that just somehow breaks social convention for me.
There is also the possibility that I consider flying to America and spending 4 days there "scary", and that the monetary price tag is not my actual problem. To fix this, I now imagine the convention was held in Europe.
...
It's not helping, I'd still have to fly. So it's not America that is scary. What if it's in my home country, a long drive (3 hours) away?
...
I can visualize myself looking up further info to see just how long of a trip it is. I can also visualize myself talking to my parents about this. I know the money will be something to talk about with my parents. ...
If I think about other long trips, I know my parents will encourage me, because it makes me more independent. They'll help me pack (do you have this, do you have that?).
For the money aspect... they'd have a serious talk with me about it. The ability to get a refund if it's a total sham would help to convince my parents. The fact that it is on a weekend helps reduce the impact as well; it's not a workweek you're taking off. Ultimately they would say it is my own money, that I am a responsible adult now, and that I am the one who should decide for myself. I'd be subject to some heavy questioning about WHY I'd want to go. Due to previous trouble with cults in the family, they'd probably ask questions in that direction - especially after looking CFAR up - a workshop to "think better"...?
I could put my foot down and say I wanted to go and they'd let me.
...
I get the feeling I should visit a meetup or some other rationality-themed event with lower entry requirements first. To get to know how those other people react and respond to things. How welcoming they are. Yes, unjust generalizations, but on the other hand, some part of those people has to think alike (rational-..istic) and thus it is worth some points as evidence. And whether I can learn anything from talking to people like that, or whether it is a massive circlejerk, so to say.
...
Man, I could get half of that by hanging around in some Skype meetup. Maybe. Sounds like something that'd be worth trying, given the low effort required.
I don't think my original monetary argument is a false argument - the cost is real - but it's slightly different. $3900 is a lot of money to spend on something you have no clue what it'd be like. It looks the same, but basically you can say "hey, this is a car, it goes really fast" and talk to a lot of people who have driven in cars, but if you want to buy a car then maybe you should try driving one first. (This analogy fails horribly because you tend to get a driver's license before buying a car. Which involves a lot of driving. So you'd definitely know what a car is.)
Got -1 on each post in this chain - downvoter, mind providing feedback? If I'm making some mistake here, I'd like to know.
Now that I've finished the game, I can say that it's quite a blast - it features far more of the situations commonly discussed on LW. I can't really say any more for the risk of spoiling a game which relies so much on story... My total playtime clocked in at 25 hours.
I have to say that the most annoying part of the game was that the sound kept cutting out; I frequently (I think a total of 50-100 times) had to go to the options menu and tap the BGM volume to get the background music to play again. More annoyingly, voices would sometimes also cut out, and the game doesn't continue until the line has been spoken, so I had to enter the options menu and tap the voice volume to fix things. I think I had to tap the voice volume about 20 times in total. This did detract from my experience, but I still think the 37 euros I spent on it were worth it for the 25 hours of play.
The puzzles were of decent difficulty; I only googled for help once, and that turned out to be me just plain forgetting about an item.