Comment author: Merlin92 23 July 2015 11:37:59AM 1 point [-]

This meeting is scheduled for 02:00. Is that correct?

Comment author: Zubon 30 July 2015 04:09:29PM 0 points [-]

Time corrected to PM. Thanks.

Comment author: solipsist 23 July 2015 04:00:34AM *  9 points [-]

I am not close to an expert in security, but my reading of those who are is that yes, the NSA et al. can get into any system they want, even if it is air-gapped.

Dilettanting:

  • It is really, really hard to produce code without bugs. (I don't know a good analogy for it -- perhaps writing laws without any loopholes, where all conceivable case law has to be anticipated in advance?)
  • The market doesn't support secure software. The expensive part isn't writing the software -- it's meticulously inspecting for defects until you become confident that any remaining defects are sufficiently rare. If a firm were to go through the expense of producing highly secure software, how could it credibly demonstrate the absence of bugs to customers? It's a market for lemons.
  • Computer systems comprise hundreds of software components and are only as secure as the weakest one. The marginal return from securing any individual component falls sharply -- there isn't much reason to make any one component much more secure than the average component. The security of most consumer components is very weak. So unless there's an entire secret ecosystem of secured software out there, "secure" systems are built on a stack of insecure consumer components.
  • Security in the real world is helped enormously by the fact that criminals must move physically near their target with their unique human bodies. Criminals thus put themselves at great risk when committing crimes, both of leaking personally identifying information (their face, their fingerprints) and of being physically apprehended. On the internet, nobody knows you're a dog, and if your victim recognizes your thievery in progress, you just disconnect. It is thus easier for a hacker to make multiple incursion attempts and hone his craft.
  • Edward Snowden was, like, just some guy. He wasn't trained by the KGB. He didn't have spying advisors to guide him. Yet he stole who-knows-how-many thousands of top-secret documents in what is claimed to be (but I doubt was) the biggest security breach in US history. And Snowden was trying to get it in the news. He stole thousands of secret documents and then yelled through a megaphone, "Hey everyone, I just stole thousands of secret documents." Most thieves do not work that way.
  • Intelligence organizations have budgets larger than, for example, the gross box office receipts of the entire movie industry. You can buy a lot for that kind of money.
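The weakest-link point above can be made concrete with a toy calculation (the numbers here are hypothetical, chosen only to illustrate the shape of the problem): if a stack has n independent components and each one resists attack with probability p, the whole stack holds only with probability p^n, which collapses quickly as n grows.

```python
# Toy weakest-link model: the system survives only if *every*
# component resists attack. Assumes independence; numbers are
# illustrative, not measurements of real software.

def system_security(per_component: float, n_components: int) -> float:
    """Probability that the whole stack holds, given per-component reliability."""
    return per_component ** n_components

# Even very reliable components add up to a fragile whole:
print(round(system_security(0.99, 1), 3))    # one component: 0.99
print(round(system_security(0.99, 100), 3))  # hundred components: ~0.366
print(round(system_security(0.99, 300), 3))  # three hundred: ~0.049
```

This is why there is little point hardening one component far beyond the rest: the product is dominated by the weakest terms.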
Comment author: Zubon 23 July 2015 09:22:27PM 6 points [-]

Additional note to #3: humans are often the weakest part of your security. If I want to get into a system, all I need to do is convince someone to give me a password, share their access, etc. That means your system is not only as insecure as your most insecure piece of hardware or software but also as insecure as your most insecure user (with relevant privileges). One person who can be convinced that I am from their IT department, and I am in.

Additional note to #4: but if I am willing to forgo those benefits in favor of the ones I just mentioned, the human element of security becomes even weaker. If I am holding food in my hands and walking towards the door around start time, someone will hold the door for me. Great, I am in. Drop the food off, look like I belong for a minute, and find a cubicle with passwords on a sticky note. Five minutes and I have logins.

The stronger your technological security, the weaker the human element tends to become. Tell people to use a 12-character pseudorandom password with an upper-case letter, a lower-case letter, a number, and a special character, never re-use it, change it every 90 days, and use a different password for every system? No one remembers that, and your chance of the password sticky note rises towards 100%.
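The trade-off in that policy can be quantified. A minimal sketch (standard entropy arithmetic, with a hypothetical 2048-word list for comparison): a uniformly random password of length L over an alphabet of size N carries L * log2(N) bits of entropy, and the unmemorable 12-character policy buys strength that users then undermine with the sticky note.

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    """Bits of entropy for a uniformly random string of the given length."""
    return length * math.log2(alphabet_size)

# 12 random printable-ASCII characters (94 symbols): strong but unmemorable.
print(round(entropy_bits(94, 12), 1))   # ~78.7 bits

# Four words from a hypothetical 2048-word list: weaker, but memorable,
# so it may actually stay off the sticky note.
print(round(entropy_bits(2048, 4), 1))  # 44.0 bits
```

The point of the comparison is that entropy on paper is not security in practice: the stronger string loses most of its value the moment it is written down next to the monitor.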

Assume all the technological problems were solved, and you still have insecure systems so long as anyone can use them.

Meetup : Ann Arbor meetup

1 Zubon 19 July 2015 04:41PM

Discussion article for the meetup : Ann Arbor meetup

WHEN: 22 August 2015 02:00:00PM (-0400)

WHERE: Pizza House, 618 Church Street, Ann Arbor, Mich. 48104

Our bi-monthly discussion group. There is no minimum to attend in terms of age, reading history, karma, or otherwise. If you can read this, you are welcome. You do not need to have read the Sequences or HPMoR. You do not need a degree. You do not need to be extroverted, neurotypical, or even think of yourself as a Less Wrong person.

Accommodations:

  • We will likely be seated upstairs. I presume that they have an elevator, but I have not checked. If mobility is an issue for you, please send me a message, and I can make sure you are accommodated.

  • I will bring the stim toys collection again. Feel free to bring your own if that keeps your hands happy. I don't use stim toys, so also feel free to critique the ones I ordered for meetups.

  • I will be ordering food to share again, so if cost is an issue, there will be feta bread and vegetarian pizza available. You are also free to order on your own or not eat. Even if cost is not an issue, please consider having some of the shared food so that no one feels self-conscious about it. If cost is an issue and you are vegan, please look at the menu and send me a message.

  • Other accommodations needed? Please comment or message me.

We have two factors that might move the date: students' return and visiting guests. Guests are uncertain but possible; our last Pizza House event was scheduled around Scott Aaronson's visit. Students' return could be a positive or a negative: good if you are a student who wants to make it, bad if we are trying to hold an event near campus during move-in weekend. Undergraduate move-in starts September 4, so we are shooting a bit before that right now.


Comment author: Zubon 19 July 2015 02:11:37PM *  6 points [-]

Now it is a strange thing, but things that are good to have and days that are good to spend are soon told about, and not much to listen to; while things that are uncomfortable, palpitating, and even gruesome, may make a good tale, and take a deal of telling anyway.

― J.R.R. Tolkien explains how we get problems with the availability heuristic in The Hobbit

Comment author: Zubon 17 July 2015 02:42:31PM 4 points [-]

Most people are neurologically programmed so they cannot truly internalize the scope and import of deeply significant, long run, very good news. That means we spend too much time on small tasks and the short run. Clearing away a paper clip makes us, in relative terms, too happy in the short run, relative to the successful conclusion of World War II.

-- Tyler Cowen

Comment author: TheAncientGeek 16 July 2015 09:24:33AM 1 point [-]

Do you think there is a right answer to the Trolley problem?

Comment author: Zubon 16 July 2015 03:36:53PM 0 points [-]

Yes: what we learn from trolley problems is that human moral intuitions are absolute crap (technical term). Starting with even the simplest trolley problems, you find that many people have very strong but inconsistent moral intuitions. Others immediately go to a blue screen when presented with a moral problem with any causal complexity. The answer is that trolley problems are primarily system diagnostic tools that identify corrupt software behaving inconsistently.

Back to the object level, the right answer depends on other assumptions. Unless someone claims to have solved all meta-ethical problems and to possess the right ethical system, "a right answer" is the correct framing rather than "the right answer," because an answer is only right within a given ethical framework. Almost any consequentialist system will output "save the most lives/QALYs."

Comment author: Zubon 15 July 2015 02:08:18AM *  0 points [-]

Any games you're looking forward to? I'm curious about Pathfinder card game (new stuff coming, never played original), City of Gears, and Die! I was using the BoardGameGeek Origins preview to scout new releases.

ETA: Gen Con preview live

Meetup : Gen Con: Applied Game Theory

2 Zubon 15 July 2015 12:10AM

Discussion article for the meetup : Gen Con: Applied Game Theory

WHEN: 01 August 2015 02:00:00PM (-0400)

WHERE: Indianapolis Convention Center

Tentative location: Hall F, the green tables just behind the CCG/TCG HQ. They say that space is generally available for open gaming; failing that, we will use the blue tables just across from HQ. If we have managed to pick peak CCG time, we will wait until 2:15 to gather folks and relocate (and will comment here to that effect). That spot is also just outside the exhibit hall, in case you decide you must own your own copy of whatever we play. Please bring your newest game or old favorite.

Meet up with Less Wrong friends and play games! Learn the newest releases, play classics, and otherwise have fun. A purely social event, running for as long as people want to stay and play, with potential continued discussion over dinner.

As at all events, there is no minimum degree, IQ, reading record, height, age, or neurotypicality to participate. Bring games if you like, learn new games if you like, and expect a range from "I am here for the championships" to "what are we Settling?"

We seemed like the kind of nerds who might go to Gen Con. You can also use the comments here to find others attending, arrange other connections at the event, discuss the con, etc.


Comment author: Zubon 09 July 2015 03:52:18AM 3 points [-]

And when your surpassing creations find the answers you asked for, you can't understand their analysis and you can't verify their answers. You have to take their word on faith --

-- Or you use information theory to flatten it for you, to squash the tesseract into two dimensions and the Klein bottle into three, to simplify reality and pray to whatever Gods survived the millennium that your honorable twisting of the truth hasn't ruptured any of its load-bearing pylons. ...

I've never convinced myself that we made the right choice. I can cite the usual justifications in my sleep, talk endlessly about the rotational topology of information and the irrelevance of semantic comprehension. But after all the words, I'm still not sure. I don't know if anyone else is, either. Maybe it's just some grand consensual con, marks and players all in league. We won't admit that our creations are beyond us...

Maybe the Singularity happened years ago. We just don't want to admit we were left behind.

-- Siri Keeton explains what a "synthesist" does in Blindsight by Peter Watts, pages 35-37

Blindsight is an amazingly Less Wrong book, with much discussion of epistemology and cognitive failures, starting with the title of the book. It is some of the hardest science fiction in existence, with a 22-page "Notes and References" section walking through 144 citations for the underlying science.

Pushing a related quote to a comment... Pushing discussion to another comment...

Comment author: Zubon 09 July 2015 03:53:01AM 0 points [-]

This being Less Wrong, this might be the point where you bring up whether P=NP and that solutions are often much easier to verify than compute. Easier does not necessarily mean easy or even within human cognitive capabilities. And if it does in whatever example comes to mind, just keep pushing to harder problems until we need not only tools to solve the problem but also meta-tools to tell us what our tools are telling us. And you can keep pushing that meta. (Did I mention that Blindsight is a very Less Wrong book?)
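The verify-versus-compute gap is easy to show in miniature. A minimal sketch using subset sum, a classic NP problem (the helper names here are my own, not from any source): checking a proposed certificate takes one pass, while finding one by brute force takes time exponential in the input size.

```python
from itertools import combinations

def verify_subset_sum(numbers, target, certificate):
    """Verification: one linear pass over the certificate."""
    return all(x in numbers for x in certificate) and sum(certificate) == target

def solve_subset_sum(numbers, target):
    """Solving: brute-force search over every subset (exponential time)."""
    for r in range(1, len(numbers) + 1):
        for combo in combinations(numbers, r):
            if sum(combo) == target:
                return list(combo)
    return None

nums = [3, 34, 4, 12, 5, 2]
answer = solve_subset_sum(nums, 9)         # expensive search finds [4, 5]
print(verify_subset_sum(nums, 9, answer))  # cheap check confirms it: True
```

Even here, "easy to verify" presumes you can read the certificate at all, which is exactly the step the Blindsight passage says breaks down when the answer comes from something smarter than you.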

We trust our tools because we trust the process we used to develop our tools, and we trust the previous generation of tools used to develop those tools and processes, and we trust... At some point, you look at the edifice of knowledge and realize your life depends on a lot of interdependencies, and that can be scary.

And then I trust Google Maps to get me most places, because I know it has a much better direction sense than me and it knows things like construction and traffic conditions.

Comment author: Zubon 09 July 2015 03:52:33AM 1 point [-]

"If you could second-guess a vampire, you wouldn't need a vampire."

-- an aphorism in Blindsight by Peter Watts, page 227

In Blindsight, a "vampire" is a predatory, sociopathic genius built through genetic engineering. They have human brain mass but use it differently; take all the brain power we spend on self-awareness and channel it towards more processing power. The mission leader in Blindsight is a vampire, because he is more intelligent and able to make dispassionate decisions, but how do you check whether your vampire is right or even still on your side? Like Quirrelmort, they are always playing at least one level higher than you.

The synthesist quote is the first time Blindsight brings up the problem of what to do when you build smarter-than-human AI. The vampire quote approaches it from a different angle, with a smarter-than-human biological AI. Vampires present a trade-off: they cannot rewrite their source code, so they cannot have a hard takeoff, but you know they are less than friendly AI.

(If you know what is wrong with the above, please ROT13 your spoilers.)
