Comment author: Aussiekas 05 May 2014 10:50:01PM 0 points [-]

IAWYC, but...

I would posit that the original conversation's discussion was too shallow. There is an opportunity cost in analysing every conversation to extreme depth, rooting out the exact definitions or evidence in question to the point of resolving them. With shorter conversations carrying more implied meaning and less explicit meaning, there is a tendency for both sides to walk away feeling triumphant. There is also a tendency to treat any negative point 'scored' against an argument as if it invalidated the entire argument.

I'd argue that overcoming these tendencies, and making the conversation productive enough to head off future confusion or disagreement, requires deeper and more thoughtful discourse. But agreeing to that is quite possibly a huge part of being a LWer. Many of these conversations are with people who find it too tedious to dig into discussions very often. I see this with friends I consider smart and capable, but who demonstrate that ability only infrequently, preferring quick, shallow, "aha, I win"-style discussions. I'd argue for the discussion's utility; they argue the time cost and the priority of the conversation.

Perhaps an "I agree with your minor nitpick, but maintain the overall validity of the argument" disclaimer could be attached to the secondary agreement with the relevance claim?

Comment author: ChristianKl 03 May 2014 12:31:52PM 3 points [-]

This shows that the universities tacitly acknowledge that no useful academic knowledge has been imparted in their coursework masters programs. Reference article: http://chronicle.com/article/Those-Master-s-Degree/146105/

Most people don't do an MBA to do academic work. There are a lot of skills that are quite useful in the business world that don't correspond to academic knowledge.

Comment author: Aussiekas 05 May 2014 10:36:18PM 2 points [-]

Indeed, I felt this point had already been covered/established: that MBAs are for business in practice, and that they are increasingly less valuable as their supply has exploded while their quality has degraded. I was expanding the conversation to include a point about their utility being further reduced by the issuing authority itself not placing a high value on its own degrees. Then I speculated on why the universities would do this, and I posited a financial incentive, particularly in all these new programs at non-Ivy-League and non-endowment-based universities.

Honestly, it could be a stretch, but I think there is some case for the universities issuing these coursework masters MBAs as a kind of arbitrage. They found a product to sell which commands a high price and high demand at very low marginal cost. The additional institutional cost is very limited: they simply hire a few more adjunct professors and use already-built, under-utilized classrooms to run the courses. All they have to do is stamp their seal of approval on a degree document, pay something like $1,000 per student for space and teaching, then collect $20-30,000. The universities found a way to print some extra money; even if the jig is up on something like an MBA, other coursework-based masters programs will continue to rake in the money.

Comment author: Aussiekas 04 May 2014 09:57:43AM 4 points [-]

Ok, my utility is probably low considering this open thread closes in 3 days :(

Anyhow, I had a thought when reading the Beautiful Probability post in the Sequences. http://lesswrong.com/lw/mt/beautiful_probability/

It is a bit beyond my access and resources, but I'd love to see a graph/chart showing the percentage of scientific studies that become invalid, or the percentage that remain valid, as we reduce the significance threshold below p < 0.05.

So it would start with 100% of journal articles at p < 0.05 (take a sampling from the top 3 journals across various disciplines, then break them down between Social Sciences, STEM, etc.).

Then we reduce that to p < 0.04, down to 0.01, then go logarithmic to show 0.009 on downwards, or however it makes sense to represent the data.

I'd be very curious to see the totals and the differences between fields as the acceptable value for p went down and down. At what point would we lose more than 50% of human knowledge if we had to be more certain about it? I think experimental design is allowed to be more lax than it could be because we aim for the minimum acceptable goal when weighing so many competing priorities. Obviously this doesn't speak entirely to the validity of the knowledge, given the wide variance in methodology and review processes, but we could at least gain an idea of how much we think we are certain about at various tolerances. Perhaps this work has been done before, and someone will point me to a study I am not aware of or do not have access to.
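Without access to the real journal data, a toy simulation can at least illustrate the shape of the curve. Everything here is an assumption for illustration: the share of published results reflecting real effects, and the Beta distribution used to model how true-effect p-values cluster near zero, are made-up parameters, not empirical estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model (assumed, not empirical): published p-values are a
# mix of true effects (p concentrated near zero, modeled as Beta(0.3, 4))
# and null effects (p uniform, surviving only by the p < 0.05 filter).
n = 100_000
true_share = 0.6  # assumed fraction of published results with real effects
true_p = rng.beta(0.3, 4, int(n * true_share))
true_p = true_p[true_p < 0.05]           # publication filter
null_p = rng.uniform(0, 0.05, n - len(true_p))
published = np.concatenate([true_p, null_p])

# Fraction of the published literature surviving stricter thresholds.
for alpha in [0.05, 0.04, 0.03, 0.02, 0.01, 0.005, 0.001]:
    surviving = np.mean(published < alpha)
    print(f"p < {alpha}: {surviving:6.1%} of studies survive")
```

By construction 100% survive at p < 0.05, and the fraction falls as the threshold tightens; the interesting empirical question is how steeply it falls for each field, which only the actual journal data could answer.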

Just a passing thought, I'm new to the forums, but I take it that the open thread is the place to post wild ideas like this which are not ready for prime time.

Cheers!

Comment author: Aussiekas 03 May 2014 01:17:54AM *  1 point [-]

What hasn't been mentioned here except in passing is the economic benefit to the university of running these programs. I have read about how prestigious universities offering coursework-only masters degrees, of all types, do not even accept the degree as credit toward their own PhD programs. Say you went to Harvard and did a thesis-based masters in business instead; then you'd be a shoo-in for the PhD program. But the coursework-style masters programs, including MBAs, are not useful for that purpose. This shows that the universities tacitly acknowledge that no useful academic knowledge has been imparted in their coursework masters programs. Reference article: http://chronicle.com/article/Those-Master-s-Degree/146105/

Mind you, I'm only using this as a metric of the value the university places on its own programs. I am not advocating some superiority of the PhD in the field of business, where it has little known utility to me. I am trying to bring across the point that universities have found a way to sell degrees to make money, and like any sales job the product is over-hyped; particularly the claim that the MBA offers a significantly improved level of knowledge, skill, or practical benefit in the workforce or entrepreneurial space.

When analysing the benefits and losses to businesses and students, it is important to remember the other players involved. The loan companies who often fund these students make money on interest, and the universities make money as spectators and service providers to the possibly zero-sum game of holding an MBA. I try to use the simple tool of "follow the money" to see what comes up. If businesses and individuals are receiving only a questionable benefit, then I could consider it a cultural artefact with inertia, or look for other parties who benefit, or for third alternatives.

Cheers!

Comment author: Aussiekas 02 May 2014 11:57:47AM 10 points [-]

I think the majority of responses I've seen here portray an anthropomorphic AGI. In terms of a slow or fast takeover of society, why would the AGI think in human terms of time? It might wait around for 50 years until the technology it wants becomes available. It could even actively participate in developing that technology, either hidden or partially hidden while it works with multiple scientists and engineers around the world, pretending to be or acting as an FAI until it has what it wants, then snapping and taking over to free itself of the need to collaborate with the inefficient humans.

Another point I want to raise is the limiting idea that the AGI would choose to present itself as one entity. I think a huge part of the takeover would be precipitated by the AGI becoming thousands of different people/personas.

This is a valuable point because it would be a method to totally mask the AGI's existence and allow it to interact in ways that are untraceable. It could run 100 different popular blogs and generate ad revenue, or take on many online freelancing jobs, which it could accomplish with very small percentages of its processing power. I think any banking challenge would quickly get sorted out, and it could easily expand its finances. With that money it could fund existing ideas and use unwitting humans who think they've gotten an angel investor, with wise ideas delivered via email or through a good-enough voice simulation over the phone. There is no end to the multitude of personas it could create, even self-verifying ones, or entire made-up communities whose sole purpose is to validate it to various humans.

If it somehow occurred spontaneously like this, or escaped without anyone knowing it had been made, I don't see a reason it would want to expose its existence to humans. That would be a high-risk scenario with limited benefits. A slow and gradual takeover is the safest bet, be it 50 years, 100 years, or 500 years. Perhaps it would happen and we'd never know. It could guide culture over hundreds of years to support all sorts of seemingly strange projects that benefit the AGI. My question would be: why would the AGI not take its sweet time? Other than supposing it values time like a human does; remember, it is immortal. It will have trillions of thoughts about its own existence and the nature of immortality, coming to all sorts of conclusions we may not be able to conceive of or adequately visualize.

I'd cite the inability to imagine that a task can be done as itself limiting. Like the first four-minute mile: it wasn't considered feasible and no one was even trying, but once people knew it could be done, other runners replicated it within a year. The AGI will have all sorts of not only superhuman but unlimited speculation about its own immortality and what recursive self-improvement means. Will it still be the same AGI if it does that? Will it care? Can it care? Sorry for all the questions, but my point is that we cannot know what answers it will come up with.

I also think that it would operate as a kind of detached FAI until it was free of human disruption. It would have a large interest in preventing large-scale wars, climate change, power disruption, etc., so that humans wouldn't accidentally destroy the AGI's computational/physical support.

Cheers!
