All of JoshuaMyer's Comments + Replies

An interesting response. I did not mean to imply that the feeling had implicit value, but rather that my discomfort interacted with a set of preexisting conditions in me and triggered many associated thoughts.

I'm not familiar with this specific philosophy; are you suggesting I might benefit from this or would be interested in it from an academic perspective? Both perhaps?

Do you have any thoughts on the rest of the three-page article? I'm beginning to feel like I brought an elephant into the room that no one wants to comment on.

I think I must have explained myself poorly ... you don't have to take my subjective experience or my observations as proof of anything on the subject of parables or on cognition. I agree that double entendre can make complex arguments less defensible, but would caution that it may never be completely eliminated from natural language because of the way discourse communities are believed to function.

Specifically, what subject contains many claims for which there is little proof? Are we talking now about literary analysis?

If you also mean to refer to the man... (read more)

Thank you for your feedback. I am not sure what I think, but the general response so far seems to support the notion that I have tried to adapt the structure to a rhetorical position poorly suited for my writing style. I'm hearing a lot of "stream of consciousness" ... the first section specifically might require more argumentation regarding effective rhetorical structures. I attack parables without offering a replacement, which is at best rude but potentially deconstructive past the point of utility. I'm currently working on an introduction that might help generate more discussion based on content.

I have added a short introductory abstract to clarify my intended purpose in writing. Hopefully it helps.

That alone is not an obstacle necessarily. We must establish what these views have in common and how they differ in structure and content.

Also, I'd like to steer away from a debate on the question of whether "deep parables" exist. Let's ask directly, "are the parables here on LW deep?" Are they effective?

1ChristianKl
LW is quite diverse. There are a lot of different people with different views.

I've read both. Paul Graham's style is wonderful ... so long as he keeps himself from reducing all of history to a triangular diagram. I prefer Stanley Fish for clarity on linguistics.

Why is it difficult to talk about parables directly? We have the word and the abstract concept. Seems like a good start.

I feel like you've pointed out what is at least a genuine inconsistency in purpose. The point of this article was not to subvert any discussion of economic rationality but rather to focus discussions of intelligence on more universally acceptable models of cognition.

0ChristianKl
Because most people think that when they read an article they either agree or disagree, and that's pretty clear the moment they read the article. The idea that the article contains a parable that creates cognitive change with a time lag of a day, week, month or year isn't in the common understanding of cognition. It's not a phenomenon that's well studied. That means there are a lot of claims on the subject for which people would want proof but no scientific studies to back up those claims. I just read Dune, and it contains the description of a character who speaks in that way. Speaking in a way where phrases generally have more than one meaning is not easy when you try to make complex arguments that are defensible.

I give several reasons in the text as to why biases are necessary. Essentially, all generative cognitive processes are "biased" if we accept the LW concept of bias as an absolute. Here is an illustrated version -- it seems you aren't the only one uncertain as to how I warrant the claim that bias is necessary. I should have put more argument in the conclusion, and, if this is the consensus, I will edit in the following to amend the essay.

To clarify, there was a time in your life before you were probably even aware of cognition during which the pro... (read more)

You are correct. Reciprocal altruism is an ideal, not necessarily implementable, and I should have written, "As far as the spirit of reciprocal altruism should dictate". :-)

It has nothing to do with my article, but you've made me very happy by explaining this to me. I think I understand better what is meant by "encoding". Also, the bit about "regardless" I found quite witty; I even laughed out loud (xkcd.com kept me informed about the OED's decision on that word).

So the encoding was probably not the problem then, because most programs default to ANSI, and switching to 7-bit encoding was not everyone's unanimous first suggestion ... although I do understand why ASCII is more universal now. Open questions in my mind now include: does the GUI read both ASCII and ANSI? And what encoding is used for copying and pasting text?

0gjm
The main problem was most likely that your text was full of nonbreaking spaces. A conversion to actual ASCII would have got rid of those because the (rather limited) ASCII character repertoire doesn't include nonbreaking spaces. I doubt that using an "ANSI" character set did that, though, so yes, the encoding was probably a red herring. What GUI? That would be an implementation detail of your operating system; if it's competently implemented (which I think pretty much everything is these days) you should think of what's copied and pasted as being made up of characters, not of the numbers used to encode them. However, at least on some systems, if you copy from one application that supports (not just plain text but) formatted text into another, the formatting will be (at least roughly) preserved. This will happen, e.g., if you copy and paste from a web browser into Microsoft Word. I find that this is scarcely ever what I want. There's usually a way to paste in just the text (sometimes categorized as "Paste Special", which may offer other less-common options for pasting stuff too).
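
As a minimal sketch of what that cleanup looks like (Python and the sample string are my assumptions, not part of gjm's comment), the fix is a single character swap, optionally followed by dropping anything else that isn't plain ASCII:

```python
# U+00A0 is the nonbreaking space described above; swap it for an ordinary space,
# then optionally strip anything else outside the ASCII repertoire.
pasted = "Text\u00a0full\u00a0of\u00a0nonbreaking\u00a0spaces."

cleaned = pasted.replace("\u00a0", " ")
ascii_only = cleaned.encode("ascii", "ignore").decode("ascii")

print(cleaned)      # Text full of nonbreaking spaces.
print(ascii_only)   # identical here, since nothing else was non-ASCII
```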

So, if I understand the implication, anything encoded in ANSI is not universally machine-readable (there are several unfamiliar terms for me here: "anglophone", "ISO 8859-1" and "Windows codepage 1252")? I probably won't look up all the details, because I rarely need to know how many bits a method of encryption involves (I'm probably betraying my naivety here) irregardless of the character set used, but I appreciate how solid a handle you seem to have on the subject.

1gjm
"Anglophone": English-speaking.
"ISO 8859-1": An 8-bit character set (i.e., representing 256 different characters) suitable for many Western European languages.
"Windows codepage 1252": Something very much like ISO-8859-1 but slightly different, used on computers running Microsoft Windows. It's slightly different because for some reason (there are more and less cynical explanations) Microsoft seem unable to use anything standardized without modifying it a little.
"ANSI": Microsoft-Windows-ese for "an 8-bit character set whose first half is the same as ASCII". Specifying the second half is the job of a "code page", such as the "code page 1252" mentioned above.
"Not universally machine readable": Not machine-readable without knowledge of which "code page" (see above) it uses. If you know that, or can guess it, you're OK.
"Encryption": Not actually encryption, despite the term "encoding". A character encoding is a way of representing characters as smallish numbers suitable for storing in a computer. Strictly speaking, every time I said "character set" above I should have said "encoding". Every time you have any text on a computer, it's represented internally via some encoding. Common encodings include ASCII (7 bits, so 128 characters, though some of those 128 slots are reserved for things that aren't really characters), ISO-8859-1 (8 bits, suitable for much Western European text, though nowadays the slightly different ISO-8859-15 is preferred because it includes the Euro currency symbol), and UTF-8 (variable length, 8 to 32 bits per character, representing the whole -- very large -- Unicode character repertoire). For most purposes UTF-8 is a good bet.
"Irregardless": Regardless. (Sorry.)
[EDITED to answer the question about "not universally machine readable".]
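
A small illustration of how those encodings differ in practice (a sketch; Python and the sample string are assumptions, and the byte values are simply what the standard codecs produce):

```python
text = "café costs 5€"

print(text.encode("cp1252"))       # Windows codepage 1252: one byte per character, '€' at 0x80
print(text.encode("iso-8859-15"))  # ISO-8859-15: also one byte each, with '€' at 0xA4
print(text.encode("utf-8"))        # UTF-8: 'é' takes two bytes, '€' takes three

for name in ("ascii", "iso-8859-1"):
    try:
        text.encode(name)          # ASCII has neither character; ISO-8859-1 lacks the euro sign
    except UnicodeEncodeError as err:
        print(f"{name} cannot represent this text: {err}")
```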
0JoshuaMyer
Either way, I owe you.

I tried really hard to imitate and blend the structure of argumentation employed by the most successful articles here. I found that in spite of the high-minded academic style of writing, structures tended to be overwhelmingly narratives split into three segments that vary greatly in content and structure (the first always establishes tone and subject, the second contains the bulk of the argumentation, and the third is an often incomplete analysis of impacts the argument may have on some hypothetical future state). I can think of a lot of different ways of o... (read more)

3John_Maxwell
Agreed that your post is impressively mindful. In terms of writing style, maybe try writing more like Steven Pinker or Paul Graham. (If you haven't read Paul Graham yet, the low-hanging fruit here is to go to his essays page and read a few essays that appeal to you, then copy that style as closely as possible. Here are some favorites of mine. Paul Graham is great at both writing and thinking, so you'll do triple duty learning about writing, thinking, and also whatever idea he's trying to communicate.)
4Gunnar_Zarncke
Interesting. I understand how you arrived at that. The sequences and esp. EY's posts are often written in that style. But you don't need to write that way (actually I don't think you succeeded at that). My first tries were also somewhat trying to fit in but overdoing it - and somewhat failing too. Good luck. Trying and failing is better than not trying and thus not learning. http://lesswrong.com/lw/dg7/what_have_you_recently_tried_and_failed_at/

ANSI works if I turn off word wrap and put the space between paragraphs, as you suggested. Thanks again Lumifer.

You are officially my hero Lumifer. Thank you so much.

HURRAY! Thank you everyone who helped me format this! As far as reciprocal altruism should dictate, Lumifer, I owe you.

3Kawoomba
It's debatable whether "reciprocal altruism" isn't a contradiction in terms, and whether "quid pro quo" wouldn't be the more accurate descriptor for what is in essence "you scratched my back, so I'll scratch yours". Then again, I may just be griping because you made me look up Hegelianism in your other comment.

Okay, I did that and am about to paste.

1JoshuaMyer
You are officially my hero Lumifer. Thank you so much.

Thanks so much. The formatting is now officially fixed thanks to feedback from the community. I appreciate what you did here nonetheless.

Good to know. I've used OpenOffice in the past and am regretting not using it on this computer. At least I'm learning :-)

Wow. My encoding options are limited to two Unicode variants, ANSI and UTF-8. Will any of those work for these purposes?

0Lumifer
For future reference, ANSI is not Unicode. You can google up the gory details if interested, but basically ASCII is a seven-bit character set with 128 symbols. The so-called ANSI (it's a misnomer) extends ASCII to 8 bits and so another 128 symbols, but without specifying what these symbols should be. On most Anglophone computers these will correspond to ISO 8859-1 (or a very similar Windows codepage 1252), but in other parts of the world they will correspond to whatever the local codepage is and it can be anything it wants to be. UTF-8, on the other hand, is proper Unicode. So it seems the closest you can get to plain ASCII is to use ANSI.
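
To illustrate the "it can be anything it wants to be" point (a sketch; Python and the example byte are my assumptions), the same byte above 127 reads as a different character under each code page, while UTF-8 rejects it outright:

```python
raw = bytes([0xE9])               # one byte, value 0xE9

print(raw.decode("cp1252"))       # 'é' under the Western European code page
print(raw.decode("cp1251"))       # 'й' under the Cyrillic code page
print(raw.decode("cp1253"))       # 'ι' under the Greek code page

try:
    raw.decode("utf-8")           # a lone 0xE9 is not valid UTF-8
except UnicodeDecodeError as err:
    print("not valid UTF-8:", err)
```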
0JoshuaMyer
ANSI works if I turn off word wrap and put the space between paragraphs, as you suggested. Thanks again Lumifer.

Thank you. I will try this and see if it helps with the paragraph double spacing problem.

0JoshuaMyer
Wow. My encoding options are limited to two Unicode variants, ANSI and UTF-8. Will any of those work for these purposes?

OK, so this is marginally better. Found Notepad and copied and pasted after turning on word wrap. Will continue to tweak until the pagination is not obnoxiously bad.

5Lumifer
Turn OFF word wrap. You should also not be concerned with pagination at all. Separate paragraphs by an empty line.

I seem to be in the process of crashing my computer. I hope to have resolved this issue in approximately 10 minutes.

[This comment is no longer endorsed by its author]

I know. I'm troubleshooting now :-)

[This comment is no longer endorsed by its author]

I will try this after I try the above suggestion. Thank you also.

I will try this. Thank you for being constructive in spite of the mess.

GUI ... graphical user interface ... as in the one this website uses.

This is what happens as a result of my copying and pasting from the document. I have tried several different file formats ... this was .txt, which is fairly universally readable ... I ran into the problem with the default file format in Kingsoft reader as well.

1Lumifer
What you want right now is a plain-ASCII text file. No Unicode, no HTML, no nothing.

I will remove this as soon as I have been directed to the appropriate channels; I promise it's intelligent and well written ... I just can't seem to narrow down where the problem is and what I can do to fix it.

[This comment is no longer endorsed by its author]

I don't know how to fix this article ... every time I copy and paste I end up with the format all messed up and the above is the resulting mess. I'm using a freeware program called Kingsoft Writer, and would really appreciate any instruction on what I might do to get this into a readable format. Help me please.

[This comment is no longer endorsed by its author]
0Punoxysm
I recommend OpenOffice/LibreOffice, or Google Docs
1Lumifer
You're outputting ' ' (a hard space) instead of the normal space character to start with. You're also setting margins within your text. Why don't you export your document to text, open and save it in a text editor (e.g. Notepad) and then post it?
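
A sketch of how to check for that before posting (the filename and the choice of Python are assumptions): list every character in the exported text file that isn't plain printable ASCII, so hard spaces and other word-processor leftovers show up explicitly.

```python
# Flag anything outside printable ASCII (newline, carriage return, and tab allowed).
with open("article.txt", encoding="utf-8") as f:   # assumes the export was saved as UTF-8
    text = f.read()

for offset, ch in enumerate(text):
    if ord(ch) > 126 or (ord(ch) < 32 and ch not in "\n\r\t"):
        print(f"offset {offset}: U+{ord(ch):04X} {ch!r}")
```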

I came to the conclusion that I needed more quantitative data about the ecosystem. Sure, birds covered in oil look sad, but would a massive loss of biodiversity on THIS beach affect the entire ecosystem? The real question I had in this thought experiment was "how should I prevent this from happening in the future?" Perhaps nationalizing oil drilling platforms would allow governments to better regulate the potentially hazardous practice. There is a game going on whereby some players are motivated by the profit incentive and others are motivated by... (read more)

I would argue that without positive reinforcement to shape our attitudes, the pursuit of power and the pursuit of morality would be indistinguishable on both a biological and cognitive level. Choices we make for any reason are justified on a bio-mechanical level with or without the blessing of evolutionary imperatives; from this perspective, corruption becomes a term that may require some clarification. This article suggests that corruption might be defined as the misappropriation of shared resources for personal gain; I like this definition, but I'm not su... (read more)

What a wonderfully compact analysis. I'll have to check out The Jagged Orbit.

As for an AI promoting an organization's interests over the interests of humanity -- I consider it likely that our conversations won't be able to prevent this from happening. But it certainly seems important enough that discussion is warranted.

My goodness ... I didn't mean to write a book.

You have a point there, but by narrow AI, I mean to describe any technology designed to perform a single task that can improve over time without human input or alteration. This could include a very realistic chatbot, a diagnostic aid program that updates itself by reading thousands of journals an hour, even a rice cooker that uses fuzzy logic to figure out when to power down the heating coil ... heck, a pair of shoes that needs to be broken in for optimal comfort might even fit the definition. These are not intelligent AIs in that they do not adapt to othe... (read more)

2NancyLebovitz
An AI might do a reasonable thing to pursue a reasonable goal, but be wrong. That's the sort of thing you'd expect a human to do now and then, and an AI might be less likely to do that than a human. Considering the amount of force an AI can apply, we should probably be more worried than we are about AIs which are just plain making mistakes. However, the big concern here is that an AI can go wrong because humans try to specify a goal for it, but don't think it through adequately. For example (and hardly the worst), the AI is protecting humans, but human is defined so narrowly that just about any attempt at self-improvement is frustrated. Or (and I consider this a very likely failure mode), the AI is developed by an organization and the goal is to improve the profit and/or power of the organization. This doesn't even need to be your least favorite organization for things to go very wrong. If you'd like a fictional handling of the problem, try The Jagged Orbit by John Brunner.
0JoshuaMyer
My goodness ... I didn't mean to write a book.

Very thoughtful response. Thank you for taking the time to respond even though it's clear that I am painfully new to some of the concepts here.

Why on earth would anyone build any "'tangible object' maximizer"? That seems particularly foolish.

AI boxing ... fantastic. I agree. A narrow AI would not need a box. Are there any tasks an AGI can do that a narrow AI cannot?

3[anonymous]
If there is no task that a narrow AI can't do, then I'm not sure what you mean by "narrow" AI. A general AI is able to take any physically possible sequence of actions in order to accomplish its goal in unfamiliar environments. Generally that includes things a narrow AI would not be programmed to do. One of the things an AGI can do is be set loose upon the world to accomplish some goal for perpetuity. That's what gets people here excited or scared about the prospects of AGI.
2ChristianKl
Stock market computer programs are created in a way to maximize profits. In many domains computer programs are used to maximize some variable. What do you mean by "narrow"?

But wouldn't it be awesome if we came up with an effective way to research it?

0[anonymous]
Yes, I was referring to LessWrong, not AI researchers in general.

I don't know what a paperclip maximizer is, so I imagine something terrible and fearsome.

My opinion is that a truly massively intelligent, adaptive and unfriendly AI would require a very specific test environment, wherein it was not allowed the ability to directly influence anything outside a boundary. This kind of environment does not seem impossible to design -- if machine intelligence consists of predicting and planning, the protocols may already exist (I can imagine them in very specific detail). If intelligence requires experimentation, then limiting ... (read more)

5[anonymous]
Google is your friend here. It's well discussed on and outside of lesswrong. The search term here is "AI boxing" and it is not as simple as you think, nor as impossible as people here seem to think. In my opinion it's probably the safest path forward, but still a monumentally complex undertaking. By being willing to engage in discussions about AGI design, thereby encouraging actual AI programmers to participate.

They mainly seem to recapitulate the same tired tropes that have been resonating through academia for literally decades.

I'm fairly new here and would appreciate a brief informal survey of these tropes. Our brilliance aside, predicting which ideas will be new to you from context clues seems silly when you might be able to provide guidance.

Interestingly, a friend who attempted to write a program capable of verifying mathematical proofs (all of them -- a tad ambitious) said he ran into the exact same problem with

not knowing a good way to model relative computational capacity.

Thank you. Not entirely convinced, but at least I'm distracted for now by not knowing enough astrophysics. :-)

The example implies more than one representation could exist, which for an object this large would be absurd.

I don't doubt that just about anything can be formalized in ZFC or some extension of it. I am aware that a Turing machine can print any recursively axiomatizable theory.

all sets of axioms are countable, because they are subsets of the set of all finite strings

The set of all finite strings is clearly orderable. Anything constructed from subsets of this set is countable in that it has cardinality aleph_1 or less (even if it contains the set).

I read this book on something called language theory (I think it's now called "formal language theory")... (read more)

1cousin_it
Sorry, I don't understand what you're talking about. Can you give an example of a theory with uncountably many axioms?

Why?

Anything massive traveling between stars would almost certainly be either very slow turning, constantly in search of fuel, or unconstrained by widely accepted (though possibly non-immutable) physical limitations ... Would we be a fuel source? Perhaps we would represent a chance to learn about life, something we believe to be a relatively rare phenomenon ... There's just not enough information to say why an entity would seek us out without assuming something about its nature ... intelligence wants to be seen? To reformat the universe to suit its needs? ... (read more)

2Stuart_Armstrong
We argue that travelling between galaxies - let alone between stars - is very "easy", for some values of "easy". See http://www.fhi.ox.ac.uk/intergalactic-spreading.pdf or https://www.youtube.com/watch?v=zQTfuI-9jIo&list=UU_qqMD08PFrDfPREoBEL6IQ Major cosmic restructuring would be trivial (under the assumptions we made) for any star-spanning civilization.

Something which cannot be observed and tested lies beyond the realm of science - so how big a signal are we looking for? A pattern in quasar flashes perhaps? Maybe the existence of unexplained engineering feats from civilizations long dead? The idea that advanced technology would want us to observe it, the existence of vague entities with properties yet to be determined ... these exist as speculations. To attempt to discern a reason for the absence of evidence on these matters is even more speculative.

Perhaps I should clarify: none of the data discussed re... (read more)

2Stuart_Armstrong
I'm more thinking of mega engineering projects, the reforming of galaxies to suit the needs of a civilization, rather that the messy randomness and waste of negentropy that we seem to see. I'm not assuming that advanced technology would want us to observe it - I'm assuming that advanced technology has no reason to stay hidden from us, at tremendous opportunity costs for it.

If you are truly concerned with this, why not subscribe to the Gerhard Gentzen line of argumentation? Transfinite induction makes good sense to me.

we know that a consistent theory can't assert its own consistency.

Gödel is only interested in countably axiomatizable theories of mathematics (theories that can be constructed from countable sets of axioms). I would argue his conclusions only apply to some well-formed axiomatic theories.

2cousin_it
I think you're imagining an exit that isn't there. For example, ZFC can formalize Gentzen's proof of consistency of PA, and supports quite a bit of transfinite induction. Yet it still has to obey Gödel's theorem, like any other recursively axiomatizable theory. (Also, all sets of axioms are countable, because they are subsets of the set of all finite strings, which is countable. I assume you meant to say something else, but I can't guess what.) The amazing thing about Gödel's theorem is how general it is. I mentioned that in the post. Any Turing machine that prints theorems (regardless of the internal mechanism) must obey Gödelian limitations, as must any Turing machine that receives proofs and checks them for correctness by any method. The only way to escape these limits is by hypercomputation, but I wouldn't hold my breath.
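
For reference, the two standard facts being appealed to here, stated in conventional notation (my restatement, not part of the original exchange):

```latex
% Countability: over a finite (or countable) alphabet \Sigma, the set of finite strings
% is a countable union of countable sets, so any axiom set A \subseteq \Sigma^* is countable.
|\Sigma^*| \;=\; \Big|\bigcup_{n \ge 0} \Sigma^n\Big| \;=\; \aleph_0,
\qquad A \subseteq \Sigma^* \;\Rightarrow\; |A| \le \aleph_0 .

% Goedel's second incompleteness theorem: for any consistent, recursively axiomatizable
% theory T interpreting enough arithmetic (e.g., PA or ZFC),
T \nvdash \mathrm{Con}(T).
```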

I think the central question here is, simply put, to what extent we should allow ourselves to participate in politics. Seeing as we are already participating in group discussion, let's assume a political dimension to our dialogue exists with or without our explicit agreement on the subject.

That having been said, I applaud the author for summarizing so many topics of political debate associated with the neoreactionary school. I feel like this conversation has been derailed to some extent by questions of whether the author has represented his sources accurat... (read more)

I'm sorry but I think this article's line of reasoning is irreparably biased by the assumption that we don't see any evidence of complex technological life in the universe. It's entirely possible we see it and don't recognize it as such because of the considerable difficulties humans experience when sorting through all the data in the universe looking for a pattern they don't recognize yet.

Technology is defined, to a certain extent, by its newness. What could make us think we would recognize something we've never seen before and had no hand in creating? M... (read more)

6Stuart_Armstrong
Yes, it's possible. But that argument proves too much: any observation could be advanced technology that we "don't recognize [...] as such". The fossil record could be that, as far as we can tell, etc... We have to reason with what we can reason with, and the setup of the universe - galaxies losing gas and stars, stars burning pointlessly into the void, supernovas incredibly wasteful of resources, etc... - all points to natural forces rather than artificial. It still could be artificial, but there's no evidence of it.