A few things stood out to me here; namely, a steelman and a strawman.

You make the sweeping claim that "AI can bring - already brings - lots of value, and general improvements to human lives", but don't substantiate that claim at all. (Maybe you think it's obvious, as a daily user. I think there's lots of room to challenge AI's utility to human beings.) Much of the "benefits of AI" talk boils down to advertising and hopeful hype from invested industries. I would understand a narrower claim, for example that "AI increases the productivity or speed of certain tasks, like writing cover letters". That might be an improvement in human lives, though it depends on things like whether the quality also decreases, and on who or what is harmed as part of the cost of doing it.

But this should be explored and supported; it is not at all obvious. Claiming that there is "lots of value" isn't very persuasive by itself -- especially since you include "improvements to human lives" in your statement. I'd be very curious to know which improvements to human lives AI has brought, and whether they stand up not just against the dangers, but against the already-existing downsides of AI as well.

That unsupported claim of benefits is, I feel, the steelman here; the strawman comes next. And I apologize if I make any inadvertently heated word choices -- the tactic you're using is a theme I've been seeing that's been getting under my skin: weighing AI's "possible gains" against its "potential dangers" in a very sci-fi, what-might-happen-if-it-wakes-up way, while failing to weigh its actual, demonstrated harms and downsides as part of the equation at all. This irks me. It particularly irks me when an argument (such as this one) claims all the potential upsides of AI as benefits for humans in one sweeping sentence, but then passes over the real harms to humans that even the fledgling version of AI we have has already caused, or set in motion.

I understand that the "threat" of a sentient supercomputer is sexier to think about -- and it serves as a great humblebrag for the industry, too. They get to say "Yes yes, we understand that the little people are worried our computers are TOO smart, hahaha, yes, let's focus on that" -- but it's disingenuous to call the other problems "boring dangers", even though I'm sure there's no interest in discussing them at AI tech conventions. Many of these issues aren't dangers at all; they're already-present, active problems that function as distinct downsides to allowing AI (with agency or not) unfettered access to our marketplaces.

Three of many possible examples of these already-a-downside "dangers" -- open to argument, but worthy of consideration:

1. Damage to the environment and the waste of tons of resources, in an era when we should definitely be improving on efficiency (and, you know, maybe feeding the poor and stuff, rather than "giving" Altman 7 trillion dollars).

2. Mass-scale theft from artists and craftspeople, which could harm or even destroy entire industries or areas of high human value. (And yes, that's an example of "the tech industry being bad guys" and not inherent to AI as a concept, but it is also how the real AI is built and used by the people actually doing it, and currently no one seems able to stop them. So it's the same type of problem as having an AI that was designed poorly with regards to safety: some rich dudes could absolutely decide to release it in spite of technicalities like human welfare. I mention this to point out that the mechanism for greedy corporations to ignore safety and human lives is already active in this space, so maybe "we're stopping all sorts of bad guys already" isn't such an ironclad reason to ignore those dangers. Or any dangers, because that's just a terrible argument for anything; sorry.)

3. The scrambling of fact and fiction[1], to the point where search engines are losing utility, and people who need to know the difference for critical reasons -- like courts, scientists, and teachers -- are struggling to do their work.

All of which is a bit of a long way to say that I see a steelman and a strawman here, making this argument pretty weak overall. I also see ways you could improve both of them: by looking into the details (if they exist) of your steelman, and by broadening your strawman to include not just theoretical downsides but real ones.

But you made me think, and elucidate something that's been irritating me for many months; thank you for that!

[1] I object to the term "hallucination"; it's inaccurate and offensive. I don't love "fiction" either, but at least it's accurate. 
 

Thank you for writing this - it's a really useful and accurate view, I think. I deal with both of these mental bastards too, and you're right, it can be hard to see them separately; but this is almost exactly how it is for me, and I'm glad you shared it.

Reading this also gave me a bunch more thoughts about monotropism, which I've been studying with great interest lately. The depression-and-physical-movement thing you describe (which, yup, hard same here) feels like it must be related to my high monotropism somehow, and I'm looking forward to looking into the link more. (You can look monotropism up if you're interested; I just wanted to share that your post gave me a good lead on a useful idea, which I appreciate!)

I like this theory/method a lot, and am excited to try it -- commenting here to increase the chances that I'll remember / be able to let you know how it goes. I'm a fast typist, but am often stymied by obstacles that it sounds like babbling-a-draft might help overcome -- in fact, whenever I "can't write", I often find myself talking through my ideas to the empty air. So recording those leaking-out-thoughts in a format that can then be edited is a tempting idea. Thank you for the details about why it works for you; they convinced me it was worth a try (and I'll try it for a few weeks at least, to be sure I've gotten past the initial awkwardness).

You may find the biography of Bertrand Russell and his life's work very interesting, I think. He set out to prove that mathematics is in fact the basis of all things, and that all things could be discovered and understood through pure logic if only our logical system were good enough. And yet he failed: his master work, the Principia Mathematica (written with Alfred North Whitehead), never completed that project, and Gödel's incompleteness theorems later showed it couldn't be completed -- that something else, something un-logical in its nature, has to underlie mathematics and logic. It sort of drove him crazy, and it makes for a fun story as well as, perhaps, a good warning for those who would cut what you're calling Intuition out of the process of discovery.

If you have lots of extra time and want to go further, looking into kungfu may also be fruitful: it's framed very differently, but as a path of knowledge, kungfu insists that would-be discoverers of its secrets must both practice and experiment rigorously and make themselves into good instruments for intuition and for receiving flashes of insight. As a style of self-education it's unique, and it has a lot to offer about using those two elements - intuition and reason - together, imo.

I have an example of this! I'm also in total agreement with you about what makes the app cool, and it's partly because I immediately related it to the following.

I'm a martial artist who teaches newbies regularly as part of my training, and a friend I was teaching told me I had to read a book about tennis, which *really* confused me. (I also thought at first that he meant Infinite Jest, but he didn't. :P) The book is The Inner Game of Tennis; it was a huge hit back in the '70s that spawned a lot of useless, ignorable, X-For-Dummies-type spinoffs. But the original is one of the best books on learning I've ever read, period, and it both confirmed and revolutionized everything I knew about learning and teaching martial arts (and swimming and singing, which are also interests of mine -- hello, friend!).

The book is by a dude (W. Timothy Gallwey) who trains tennis players, who's played his whole life, who had SILLY successes with total noobs and never-been-athletic people, and who finally wrote down how he was doing it. There's a ton of great detail in the book about specific methods and tiny brain things that, for the most part, are very easy to translate out of tennis and into a million other things (the tendency for spinoffs to happen makes sense here, at least); but the main crux of it is almost exactly what you're describing with the app: give the brain "what it feels like to do it right", and then let it constantly compare what you're doing to that, in real time, and adjust all the little sub-skills however it needs to in order to get the right result. Everything else is just sauce; Practice is the brain making that in-the-moment comparison and learning to adjust for it, and Practice is everything to developing skill. Thus, by focusing directly on training people to feel the difference, in the now, when hitting the ball, he trained them in the one fundamental of tennis that would give them the fastest possible access to actual expertise: how to Practice.

Anyway, I think I've said enough; you should definitely check out the book if you're interested in more on this lesson about learning. And I'm super geeked that there's a singing app that uses this type of feedback, it sounds like, to excellent effect! I'm equally annoyed that it isn't for Android, but that's not your fault. :) Thanks for the awesome post!

That's been pretty much exactly my experience as well, with one possible addendum: I work really hard to make sure I can sleep as long as I want if I notice I might be getting sick, since catching it early like this is VERY likely to prevent the illness altogether.

Studies, no. I wrote a book (ubersleepbook.com, if y'all don't mind me dropping that link -- if it's verboten, I'll remove it, and sorry) that compiles as much information as I've been able to get ahold of over a decade of running a site and communicating with people on the subject, and it has chapters that address your other two (very good!) questions. The short answer is: AFTER adaptation, polyphasic sleep copes with events (including sickness, travel, and "just life") just like monophasic sleep does, only in a compressed / hyperefficient manner. DURING adaptation the schedule is super strict and will get thrown off by these things, but once it's well-ingrained, things work surprisingly similarly -- just shorter.

I've developed a hilariously Pavlovian response to songs I used for alarms at some point or another -- I can still hear "The Authority Song" by Jimmy Eat World and, if I'm sitting or reclining, feel a physical itch to stand.

I only use a very quiet beepy thing anymore, or my phone if that's what I've got, and it usually doesn't even go off before I wake up (I deliberately set alarms a few minutes later than I'll wake up so that I have a chance to get up and pre-emptively shut them off), but for a while using songs was a fun way to play with the ol' brain!

I've been some kind of polyphasic for a solid decade (more, actually, but with breaks that bring it to about that overall). I use an alarm if my schedule is changing -- e.g., I'm doing a day of Uberman to get more done, or I missed a nap and so am sleeping 4.5h tonight instead of 3 -- but even then I often don't need it. Once I've been on my regular Everyman 3 schedule for a few days straight, no alarms are necessary, including popping right awake at 4am feeling great. I only use alarms for naps anymore if I want to read when I wake up, so that I don't get sucked into my book and waste too much time; I wake up so reliably after 20 minutes that my friends have used me as a timer.

I love being made of programmable firmware. ;)

I would hope that I'm not the only source that insists on limiting or eliminating driving for at least the few really hard days of an Uberman adaptation, yeah. Also, you know, don't perform surgery or operate giant cranes. Just in case we needed to add that. ;)
