There are definitely some opportunities like that, but being a classical violinist with an orchestra is the first preference by far because it's so much more enjoyable to play the orchestral repertoire, and because having a full-time seat in an orchestra also puts you at the top of every booking agent's list for casual gigs. Aim high, fail high, seems to be a good approach.
(To be fair, I would make anything sound this extreme, if I were writing about it while in the mood I was in when I wrote this. I love a rant.)
I guess any classical instrument is a device for torturing perfectionists, but violin has a particularly brutal drop-off in sound quality as you reduce your daily focused practice time. Between 'lapsed professional piano' and 'lapsed professional violin' I know which one I'd pick to listen to. You just can't do a few hours of practice a week and play the violin very nicely in tune, or at least I've never met anyone w...
See here: https://www.lesswrong.com/posts/b9oockXDs2xMdYp66/announcement-ai-narrations-available-for-all-new-lesswrong
Please share your feedback here or in the comments on that post, it's helpful for our decision-making on this :)
Big +1 to playing with others, especially others around the same level or slightly better or worse.
Motivation is one thing, but it's also just... healthier. One's musical 'practice' can't be totally inward-looking; that's when perfectionism starts to bite. Orchestra forces you to compromise and actually learn and perform music, gets you out of the practice room, and generally turbocharges your learning by exposing you to a more varied set of demands on your playing and musicality.
Super hard mode is forming a string quartet with others, since your playing is very exposed and it forces you to stay in time and balance your sound with the others.
Thanks for the feedback!
Having the audio reflect updates to the text is relatively easy to fix, and that feature is in the pipeline (though for now user reports are helpful for this).
There's some fairly complex logic we use for lists — trying to prevent having too many repetitive audio notes, but also keeping those notes when they're helpful. We're still experimenting with it, so thanks for pointing out those formatting issues!
You'd probably want to factor in some time for making basic corrections to pronunciation, too.
ElevenLabs is pretty awesome but in my experience can be a little unpredictable with specialist terminology, of which HPMOR has... a lot.
It wouldn't be crazy to do an ElevenLabs version of it with multiple voices etc., but you're looking at significant human time to get that all right.
It's unlikely we'll ever actually GENERATE narrations for every post on LessWrong (distribution of listening time would be extremely long-tailed), but it's plausible if the service continues that we'll be able to enable the player on all LW posts above a certain Karma threshold, as well as certain important sequences.
If you have specific sequences or posts in mind, feel free to send them to us to be added to our list!
Double-attrition perfectionism and the violin
An interesting thing about violin is that the learning process seems nearly designed to produce 'tortured perfectionists' as its output.
The first decade of learning operates as a two-pronged selection process that attrits students at different stages of their learning journey, requiring perfectionism at some points and tolerance at others.
You could be boring and argue that it always requires both attention to detail and tolerance of imperfection, simultaneously. You could also argue that there's a fractal, s...
Edit: removed a bad point.
If you object strongly to the use of the term UBI in the post, you can replace it with something else.
Then I make a number of substantive arguments.
Your response so far is 'if it's a UBI it won't suffer from these issues by its very definition.'
My response is 'yes it will, because I believe any UBI policy proposal will degrade into something less than the ideal definition almost immediately when implemented at scale, or just emerge from existing welfare systems piecemeal rather than all at once. Then all the current concerning 'bad things that happen to people who depend on government money' will be issues to consider.'
I'm speaking about the policy that's going to be called UBI when it's implemented. You're allowed to discuss e.g. socialism without having to defer to a theoretical socialism that is by definition free of problems.
Anyway, it's a quibble, feel free to find and replace UBI with 'the policy we'll eventually call UBI', it doesn't change the argument I make.
Where do I call existing welfare systems UBI? That's a misunderstanding of my argument.
My point is that I don't think it's likely that future real-world policies will BE universal. They'll be touted as such, they might even be called UBI, but they won't be universal. I argue they're likely to emerge from existing social welfare systems, or absorb their infrastructure and institutions, or at least their cultural baggage.
I can see the confusion, and maybe I should have put 'UBI' in quotes to indicate that I meant 'the policy I think we'll actually get that people will describe as UBI or something equivalent.'
My point is not to argue that existing welfare systems are UBI. I don't use any non-standard definitions. I don't call existing welfare systems UBI.
My point is that the real-world policy we're likely to eventually call UBI probably won't actually be universal, and if it emerges as a consequence of more and more people relying on social welfare, or else is associated with social welfare culturally, bad things will likely happen. Then I give some examples of the sort of bad things I mean.
...I frequently hear people saying something like "and this is why w
Yeah, I agree with a lot of this, and this privacy concern was actually my main reason to want to switch to Obsidian in the first place, ironically.
I remember in the book The Age of Surveillance Capitalism there's a framework for thinking about privacy where users knowingly trade away their privacy in exchange for a service which becomes more useful for them as a direct consequence of the privacy tradeoff. So for example, a maps app that remembers where you parked your car. This is contrasted with platforms where the privacy violations aren't 'paid back...
The argument is:
1. You probably can't make it universal.
2. If people can be excluded from the program and depend on it, it creates a power differential that can be abused.
3. There are lots of present-day examples of such abuse, so absent a change, that abuse or similar will continue to exist even if we have a UBI.
On the subject of jargon, there's one piece of jargon that I've long found troubling on LW, and that's the reference to 'tech' (for mental techniques/tools/psycho-technologies), which I've seen Duncan use a few times IIRC.
A few issues:
1. It's exactly the same usage as the word 'tech' in the fake scifi 'religion' that must not be named (lest you summon its demons to the forum through the Google portal). They do exercises to give them new mental tools, based on reading the lengthy writings of their founder on how to think, and those lessons/materials/techniq...
One question that occurred to me, reading the extended GPT-generated text. (Probably more a curiosity question than a contribution as such...)
To what extent does text generated by GPT-simulated 'agents', then published on the internet (where it may be used in a future dataset to train language models), create a feedback loop?
Two questions that I see as intuition pumps on this point:
I think this is a legitimate problem which we might not be inclined to take as seriously as we should because it sounds absurd.
Would it be a bad idea to recursively ask GPT-n "You're a misaligned agent simulated by a language model (...) if training got really cheap and this process occurred billions of times?
Yes. I think it's likely this would be a very bad idea.
...when the corpus of internet text begins to include more text generated only by simulated writers. Does this potentially degrade the ability of future language models to model agents, perform logic
The correlation between "bothers to have an opinion on correctness of others' writing" and "knows what the correct answer actually is" seems too high.
(Edit: reading between the lines, I assume you're saying that the cohort of people who care enough about faze/phase to be judgemental about it, yet don't themselves know the correct spelling is 'faze', is small.)
This is very interesting. I certainly agree this is our point of difference – I think there's a big cohort out there with strong, judgey opinions about 'correctness' and an active...
Just pitching in on the last two: there's an abbreviated register of speech in English called 'note-taking register' that has crept its way into a lot of parts of speech and writing, including website navigation. Dropping the definite article (or most articles in general) is a core part of that register.
Note taking = abbreviated English register. Has crept into parts of speech, writing inc. website nav. Dropping definite article core part of register.
I suspect dropping the definite article in 'refresh page' is not related to definiteness; it's a linguistic...
Software: Newsfeed Eradicator + Leechblock NG
Need: Resilient self-control/anti-akrasia for web browsing.
Other programs I've tried: Stayfocusd, Forest
The problem with Stayfocusd, or any website blocker, is that invariably you have to navigate to a given tweet or YouTube video or Facebook profile for legitimate reasons, which means going and deactivating the plugin. This is bad because (1) it trains you to perform that action, and (2) it incentivises you to avoid making the plugin too difficult to deactivate.
Newsfeed Eradicator kills only the problem parts of ...
I do the SSC Podcast; one of my Patreon supporters said he'd be really keen to have this as an audiobook. I'd certainly be keen to get an idea of the demand for that and could potentially make it happen if it seemed like it would be useful. If you wanted to chat about it you can get me on slatestarpodcast@gmail.com. Thanks!
Under MWI of QM, anthropics gets weird.
In a single-universe interpretation, we can posit that biogenesis is rare, but we do know it happened at least once in ~two trillion galaxies' worth of stars in ~13 billion years.
In MWI it could be even rarer: with unlimited branches for wild coincidences of chemistry to occur, we're necessarily living in a branch where such a coincidence did occur. Allow for argument's sake that biogenesis is so rare that branches where life is found are tiny in measure. We find ourselves in such a branch, so anthropics and branching kind of gives us t...