I've heard of narrow AIs that can supposedly identify an author from their writings. I'm not certain how accurate they are, or how much material they need, but perhaps we could use such a system here to identify sockpuppets and make ban evasion more difficult.
I would be very interested in trying one of those. In particular, I frequently change up my writing style (deliberately), and it might be able to tell me what I'm not changing.
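For concreteness, here's a toy sketch of the character n-gram approach that authorship-attribution systems often build on (the samples are made up for illustration; real systems use far richer features and much more text):

```python
from collections import Counter
from math import sqrt

def char_ngrams(text, n=3):
    """Frequency vector of character n-grams, a common stylometric feature."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine_similarity(a, b):
    """Cosine similarity between two sparse frequency vectors (Counters)."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Two samples in a similar style vs. one in a very different style:
sample_a = "I would be very interested in trying one of those systems."
sample_b = "I would be quite interested in testing one of those systems."
sample_c = "LOL no way, that stuff never works, trust me!!!"

sim_same = cosine_similarity(char_ngrams(sample_a), char_ngrams(sample_b))
sim_diff = cosine_similarity(char_ngrams(sample_a), char_ngrams(sample_c))
assert sim_same > sim_diff  # stylistically similar texts score higher
```

The interesting part for the "what am I not changing?" question is that character n-grams pick up low-level habits (punctuation, function words, spelling) that are hard to vary deliberately.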
Awesome!!!
I'm pretty sure that programming and reading about programming are much better ways of improving at programming than reading about rationality is.
Right, that's what motivated the post. I feel like spending time learning "domain specific knowledge" is much more effective than "general rationality techniques". Even if you want to get better at three totally different things over the course of a few years, the time spent on a general technique (that could help all three) might not help as much as time spent exclusively on specific techniques.
Still, I tend to have faith in abstractions/generality, since my mind has good long-term memory and bad short-term memory. I guess this is... a crisis of faith, if you will, in "recursive personal cognitive enhancement" (lol).
I feel like spending time learning "domain specific knowledge" is much more effective than "general rationality techniques".
However, reading Lesswrong is what prodded me towards getting better at spending my time effectively, really getting into a growth mindset. My only problem nowadays is that there are too many things I want to learn, and that's a much better problem to have; I know I can, I just have to pick and choose. I'm getting better at that, too.
Maybe the same would have happened anyway, but I don't think it would have happened quite as fast.
I agree. My reason for posting the link here is as a reality check: LW seems to be full of people firmly convinced that brain uploading is the only viable path to preserving consciousness, as if the implementation "details" were an almost-solved problem.
Ah, no. I do agree that uploading is probably the best path, but B doesn't follow from A.
Just because I think it's the best option, doesn't mean I think it'll be easy.
In an infinite universe, is it not the case that every possibility is instantiated at least once with probability one?
Maybe.
Determining which possibilities this is false for in our particular universe would take some time, and depends on the exact form of the laws of physics (which we don't know), so let's use a simplified example.
Take the Game of Life. While simple, it is in fact Turing-complete; this was demonstrated by implementing a Turing machine in it, which is the best way to demonstrate that sort of thing. (It's fun to put one cell out of place and watch it disintegrate.)
Take an infinitely large Game of Life. Start it in a random state and leave it to evolve for an infinite amount of time. As you'd expect, a lot of things happen; in a universe that large, you will indeed see, for example, all possible simulations of Earth. So in that sense, "all things happen"...
But there are some states of the world which you will never see, no matter how long you wait. These are called Garden of Eden states.
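The rule itself is simple enough to sketch. A minimal, illustrative implementation on an unbounded grid (the blinker example is my own addition, not from the original comment):

```python
from collections import Counter

def life_step(live):
    """One step of Conway's Game of Life on an unbounded grid.
    `live` is the set of (x, y) coordinates of live cells."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step iff it has 3 live neighbours,
    # or 2 live neighbours and is currently alive.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row oscillate with period 2.
blinker = {(0, 0), (1, 0), (2, 0)}
assert life_step(blinker) == {(1, -1), (1, 0), (1, 1)}
assert life_step(life_step(blinker)) == blinker
```

A Garden of Eden state is one that is not `life_step(s)` for any state `s`: no matter what you feed in, it never comes out.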
There's a very good chance that there are also Garden of Eden states for this universe. They're likely to be states such as "the universe is tiled with a Mandelbrot pattern of black holes"; states which are simply so unstable that they cannot naturally arise. There may also be less exotic states of that sort, but I feel less secure about claiming that...
And the Garden of Eden theorem, if it is applicable to our universe, states that it has Garden of Eden states if and only if time is non-reversible. As physics does indeed appear to be time-reversible, that's a bit of a problem. However, I don't know how applicable it is to our non-cellular physics.
Your mean of quartiles is very much like Tukey's trimean, which is (LQ+2M+UQ)/4 rather than (LQ+M+UQ)/3. I expect it has broadly similar statistical properties. Tukey was a smart chap and I would guess the trimean is "better" for most purposes. But, as Lumifer says, what counts as better will depend on what you're trying to do and why.
(E.g., the trimean will have a better breakdown point but be less efficient than the mean; a worse breakdown point but more efficient than the median.)
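For concreteness, a sketch of both estimators in Python (the choice of quantile convention here is my own assumption; the comments above don't pin one down):

```python
import statistics

def quartiles(xs):
    """Lower quartile, median, upper quartile (inclusive method)."""
    lq, m, uq = statistics.quantiles(sorted(xs), n=4, method='inclusive')
    return lq, m, uq

def trimean(xs):
    """Tukey's trimean: (LQ + 2*M + UQ) / 4."""
    lq, m, uq = quartiles(xs)
    return (lq + 2 * m + uq) / 4

def mean_of_quartiles(xs):
    """The estimator from the parent comment: (LQ + M + UQ) / 3."""
    lq, m, uq = quartiles(xs)
    return (lq + m + uq) / 3

data = [1, 2, 3, 4, 5, 6, 7, 100]  # one wild outlier
# Both quartile-based estimators are barely moved by the outlier,
# while the plain mean is dragged far to the right.
```

The robustness point is easy to see here: the outlier at 100 barely affects either quartile-based estimate, but pushes the plain mean up to 16.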
(E.g., the trimean will have a better breakdown point but be less efficient than the mean; a worse breakdown point but more efficient than the median.)
What does "efficient" mean, in this context? Time to calculate would be my first guess, but the median should be faster to calculate than the trimean.
Did my first serious bit of Minecraft modding, and learned how to use Blender in the process. It's not as impressive as the things I do at work, but it's fun.
Should I put my elephants in RAID 5, or should I just go with RAID 0, since they never forget?
There's a good chance you'll have a second elephant failure while the first one is giving birth to a replacement, so at least use RAID6.
Or ZFS RAIDZ2. That's also great.
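For anyone who wants the joke unpacked: a toy XOR sketch of the single-parity idea behind RAID 5, which can survive exactly one failure (RAID 6 adds a second, independent parity, so losing a second elephant during the rebuild is also survivable). The block contents here are made up:

```python
from functools import reduce

def parity(blocks):
    """XOR parity across equal-length data blocks (the scheme behind RAID 5)."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def reconstruct(surviving_blocks, parity_block):
    """Rebuild one lost data block from the survivors plus the parity block."""
    return parity(surviving_blocks + [parity_block])

d0, d1, d2 = b"elep", b"hant", b"mems"  # three data blocks
p = parity([d0, d1, d2])                # one parity block

# Lose d1 ("one elephant dies"); recover it from the rest:
assert reconstruct([d0, d2], p) == d1
```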
LW's strongest, most dedicated writers all seem to have moved on to other projects or venues, as has the better part of its commentariat.
In some ways, this is a good thing. There is now, for example, a wider rationalist blogosphere, including interesting people who were previously put off by idiosyncrasies of Less Wrong. In other ways, it's less good; LW is no longer a focal point for this sort of material. I'm not sure if such a focal point exists any more.
Where, exactly? All I've noticed is that there's less interesting material to read, and I don't know where to go for more.
Okay, SSC. That's about it.
PSA: I had a hard drive die on me. Recovered all my data, with about 25 hours of work in total for two people working together.
Looking back on it, I doubt many things could have convinced me to improve my backup systems. Short of working in the cloud, my best possible backups would probably still have lost at least the last two weeks of work.
I am taking suggestions for best practice, but this is also a shout-out to backups: given it's now a new year, you might want to back up everything from before 2016 right now, then work on a solid backup system.
(Either that, or always keep 25 hours on hand to manually run a ddrescue process over separate sectors of a drive, unplugging and replugging it between reads until you get as much data out as possible, staying up until 5am for a few nights trying to scrape the entropy back out of the bits...) I firmly believe that the right automated system would take less than 25 hours of effort to maintain.
Bonus question: what would convince you to make a backup of your data?
Use a backup system that automatically backs up your data, and then nags at you if the backup fails. Test to make sure that it works.
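A minimal sketch of the "back up, then verify" half of that advice (the directory names are placeholders; the nagging half would be whatever alerting your cron/systemd setup supports):

```python
import hashlib
import os
import shutil
import tempfile
from pathlib import Path

def file_digest(path):
    """SHA-256 of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def backup_and_verify(src_dir, dest_dir):
    """Copy src_dir to dest_dir, then verify every file's checksum.
    Returns the list of files that failed verification (empty means success)."""
    src, dest = Path(src_dir), Path(dest_dir)
    shutil.copytree(src, dest, dirs_exist_ok=True)
    failures = []
    for f in src.rglob('*'):
        if f.is_file():
            copy = dest / f.relative_to(src)
            if not copy.is_file() or file_digest(f) != file_digest(copy):
                failures.append(str(f))
    return failures

# Minimal demonstration (a stand-in for the real scheduled job):
src = tempfile.mkdtemp()
Path(src, "notes.txt").write_text("don't lose me")
dest = os.path.join(tempfile.mkdtemp(), "backup")
failed = backup_and_verify(src, dest)
# `failed` being non-empty is the condition that should trigger the nagging.
```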
For people who don't want to or can't run their own, I've found that Crashplan is a decent one. It's free if you only back up to other computers you own (or other people's computers); in my case I've got one server in Norway and one in Ireland. There have, however, been some doubts about Crashplan's correctness in the past.
There are also about half a dozen other good ones.