All of belkarx's Comments + Replies

Yes, but they were subtle.

Oh I totally forgot to mention control theory, add that.

  • ctrl theory: brian douglas on yt
  • 3d sketching: just draw things from models you'll get better QUICK
  • optics, signal processing: I learned from youtube, choice MIT lectures, implementing sims, etc but there are probably good textbooks
  • abstract algebra: An Infinitely Large Napkin (I stan this book so hard)

This may be a me thing, but I draw stuff out when I ideate (esp w hardware), and more dimensions -> better physical models -> better mental models -> faster iteration

1Johannes C. Mayer
In principle, this seems quite plausible that it could be helpful. I am asking if you have actually used this and if you have observed benefits.
  • the Enneagram fears and motivations. Good compression of a lot of people.
  • IMPROV
  • better 3d sketching
  • architecture (think Burglar's Guide to the City), urbex
  • optics (lotsa good metaphors)
  • signal processing
  • abstract algebra
2Johannes C. Mayer
How is 3d sketching good? I don't understand. I guess it's like a whiteboard, but in 3d (I assume you are talking about the VR thing). Could you explain how you think this is useful? How have you used this in the past? What could you do using 3d sketching that you could not do before (or that got significantly easier to do).
1FinalFormal2
Thanks for the response! Do you have any recommended resources for learning about 3d sketching, optics, signal processing or abstract algebra?

side comment that I've been reminded of: epigenetics *exist(s?)*. I wonder if that could somehow be a more naturally integrate-able approach

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4251063/
 

  • I like the premise. I'm glad this is getting researched. But:
  • Lots of things in the space are understudied and the startup-vibe approach of "we'll figure this all out on the way because previous papers don't exist" seems way less likely to work with bio than tech because of the length of iteration cycles. But props if it does?
    • Black swan effects of polygenic edits
      • cellular stress if on a large scale?
      • might be an exception where pleiotropy does actually matter, which would suck. the table in another comment showing correlations between illnesses is pretty convi
... (read more)
4GeneSmith
We already expect that too many editor proteins in the cells could be a problem. But that will show up in cell culture experiments and animal studies, and we can modify doses, use more efficient editors, and do multiple rounds of editing to address it. We also know about liver toxicity from too many lipid nanoparticles, but that's an addressable concern (use fewer nanoparticles, ensure they get into the target organ quickly). I'm sure there will be others that I don't expect, but that's true of literally every new medical treatment. That's the whole point of running experiments in cell cultures and mice.

We actually have some pretty good studies on pleiotropy between intelligence and other traits. The only consistently replicated effect I've seen which could be deemed negative is a correlation with mild Asperger's-like symptoms. But you can just look at current people already alive to test the pleiotropy hypothesis. Do unusually smart people have any serious problems that normal people don't? The answer is pretty clearly "no". I expect that to continue being the case even if we push intelligence to the extremes of the current human range.

This is true almost by definition for any new technology. I would be very surprised if a pharmaceutical, or even a bunch of pharmaceuticals, could replicate the effects of gene editing. Imagine trying to create a set of compounds that could replicate the effects of gene editing: you would need thousands of different compounds to individually target the pathways affected by all the variants. And you would need pharmaceuticals that would modulate their activity based on the current state of the cell. After all, that's how a lot of promoters and repressors and enhancers work; their activity depends on the state of the cell!

I still think people should work on nootropics. And you may of course be right that this won't be ready before AGI. I'd put the odds at maybe 20%. But it COULD actually work! And if it did the impact would be

Do the people who contract things out because their time is worth $n/hr or whatever actually keep track of how many "extra" hours they work on top of their basic expenses, such that they know how much work they can practically expense out? Or is this a thing that people just say and don't stop to actually think through? Very much has the vibes of a family with 300k+ takehome or whatever living paycheck to paycheck cuz they're too good for certain things

There is no such thing as the present, and you are experiencing everything that can possibly be experienced

Deja vu is actually the only time you’re not repeating things infinitely

no creative, original thought exists. everything has been thought, and you've just forgotten. you know everything, you just don’t know that you know everything 

An actually appropriate replacement for what literature should be trying to develop is debate

1Epirito
Sure. But would you still feel the need to replace it if you lived in a world where it wasn't taught in school in the first place? Would you yearn for something like it?

I would be interested in dialoguing about:
- why the "imagine AI as a psychopath" analogy for possible doom is or isn't appropriate (I brought this up a couple months ago and someone on LW told me Eliezer argued it was bad, but I couldn't find the source and am curious about reasons)

- how to maintain meaning in life if it seems that everything you value doing is replaceable by AI (and mere pleasure doesn't feel sustainable)

2Garrett Baker
I wouldn’t mind talking about the meaning thing you’re interested in.

Research on the effectiveness of hypnosis as an analgesic and in general

How does cryonic neuropreservation account for the peripheral and enteric nervous systems? Why do they assume the CNS is enough?

2Viliam
As far as I know, you have both options, but freezing only the head is cheaper. My guess would be that the head is not everything, but it is more than 90% of "everything". I may be wrong; there is a lot of guessing here. I suspect that more people would freeze their entire bodies if cryonics became much cheaper.

I had been communicating with someone who had had great success and very fine control with modification, so that was a clear “this is possible” (they were much more careful though!), and I was also reflecting a lot on how people don’t explicitly take advantage of their self modifying properties enough (it is amazing that we can just … will thoughts and goals into existence and delude ourselves like what?? and the % of people that meditate is low??! the heck?). 
 

I think my success was mostly due to just being in a frame of mind that made me very r... (read more)

Bad sleep schedule is another good example of something that gets romanticised when it shouldn’t.

5Neil
Yep. The worst thing I've seen romanticized in my milieu though is poor mental health: for some reason it's quite "cool" to say you're depressed all the time, and while I know some of my friends are actually genuine, I'm not comfortable with the social pull toward the depression aesthetic. It's so weird.

There is a strong analogy to be made between human psychopaths and misaligned AI

2RHollerith
Eliezer warns against trying to use that analogy to reason about misaligned AI (for example in his appearance on the Bankless podcast).

I wonder if having a significant other work by you so you can see each others' screens would have a similar effect - I assume the effect is diminished because it's a more familiar relationship, but it might work out? Has anyone tried this?

2Portia
Yes. The effect is diminished, despite the fact that my girlfriend is my colleague, and I value her immensely, and she is a workaholic. I suspect because it is impossible to keep up a 100% productivity facade in front of her, so there is basically precedent for being lazy in front of her. I also know she loves me and is understanding, so she will be supportive if I reschedule sessions. And finally, she is a potential distraction herself - she's hot, and that comes with the definite alternative of sex or snuggles to this work task I do not know how to approach.

For us, it worked when we both had offices in the same university building. I do not laze around at university. I never use my workspace or laptop there for anything not work related. There are no available offline distractions beyond us playing ping pong in the breaks. And the whole work context feels like a scenario where you have to be put together. I can't just be in my yoga clothes or my snuggly bathrobe, but I am dressed professionally, my hair doesn't look crazy, and this physical presentation affects how I hold myself, and what I expect to do. So in that context, we are colleagues only, and hence also only see each other being either productive, or taking genuinely necessary and rejuvenating breaks.

What can a researcher even do against a government that's using the AI to fulfill their own unscrupulous/not-optimizing-for-societal-good goals?

1mruwnik
That is what AI Governance focuses on. The pessimistic answer is that a single researcher can't do anything. Though that really can be said about all aspects of government. Luckily most governments that count are at least sort of democratic and so have to take into account the people. Which implies that a researcher can't do much, but a group of them might be able to. In other words - politics. If you know how the AI works, you could work on coming up with malicious inputs, something like this, but more general. Though now we're going deep into the weeds - most people here are more focused on how to make the AI not want to kill us, rather than not letting it be used to kill us. 

Maybe this is obvious but isn't AI alignment only useful if you have access to the model? And aren't well-funded governments the most likely to develop 'dangerous'/strong AI, regardless of whether AI alignment "solutions" exist outside of the govt sphere?

1mruwnik
Define access. It's more like computer security. The closer you are to the bare metal, the more options you have. If you can manually change the weights of a NN, you can do more than if you can only change its training data. But both will influence the resulting model. It also depends on whether the AI can learn new stuff after deployment - if it can learn new tricks, that means that you can modify it remotely by crafting sneaky inputs. The reason that well funded governments can develop interesting things is because they have enough resources to do so. If you give a private org enough resources, they can also do wonders. Like SpaceX. Of course you could say that they're building on decades of hard work by NASA etc., but that goes to show that after a certain amount of external research is done to blaze the trail, then private companies can come in to exploit the new opportunities. This is sort of happening now with DeepMind, OpenAI etc.

Bias is like turning your head to the side as you walk - if you don't pay attention, you will subconsciously start drifting in the direction you're looking. It's possible to stay on track, but you have to be vigilant about maintaining a straight line

I was the same way, but I honestly do not feel a negative impact from skimming the useless noise off. You should try it! Just catch yourself when you're making short-term, ultimately unproductive observations. It helps to switch to thinking in a language you're less familiar with. Then if you wish, you can return to the super-verbal state of mind

After reading LW more consistently for a couple weeks, I started recognizing rationalists in other parts of The Internet and wondered which social media platforms were common among them. My guesses are Twitter, Hacker News, StackExchange, and Quora, in about that order, and I will eventually attempt to confirm this more rigorously, be it by demographic survey or username correlation (much less reliable).

For now, I was particularly interested in finding LW users that are also on Hacker News, so I quickly queried both sites and found ~25% of active LW users had Hacker News ... (read more)
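The quick overlap check described above can be sketched in a few lines (a minimal sketch with made-up usernames; a real run would pull active-user lists from the LessWrong and Hacker News sites, and raw username matching can of course collide on different people who share a handle):

```python
# Illustrative username-overlap estimate. The user lists below are
# hypothetical placeholders, not real data from either site.

def overlap_fraction(users_a: set[str], users_b: set[str]) -> float:
    """Fraction of users_a whose (case-normalized) username also appears in users_b."""
    norm_a = {u.lower() for u in users_a}
    norm_b = {u.lower() for u in users_b}
    if not norm_a:
        return 0.0
    return len(norm_a & norm_b) / len(norm_a)

# Hypothetical sample data:
lw_users = {"alice", "bob", "carol", "dave"}
hn_users = {"Alice", "eve", "frank", "bob"}

print(f"{overlap_fraction(lw_users, hn_users):.0%}")  # 2 of 4 match -> 50%
```

Case normalization matters because the two sites treat username casing differently; anything beyond exact-match (edit distance, linked profiles) would cut down on false positives.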

Here's a sample (prompt: Has AI gone too far? A persuasive essay for school, temperature .7):

The introduction of artificial intelligence (AI) into our lives has been met with both excitement and reservation. AI has been used to automate mundane tasks, improve medical diagnoses, and even create artwork. However, as AI continues to develop, the question of whether or not it has gone too far must be asked. The potential of AI is undeniable. It has the capability to revolutionize the way we do things and improve our lives. AI can help us make decisions faster ... (read more)

0[anonymous]
Pretty basic but much better written than OP.

Suggestion: If your teacher is the type to find it amusing (as opposed to threatening consequences for plagiarism or academic dishonesty), give the prompt to GPT-3 (students have free credits) and submit whatever it tells you

6belkarx
Here's a sample (prompt: Has AI gone too far? A persuasive essay for school, temperature .7):

The introduction of artificial intelligence (AI) into our lives has been met with both excitement and reservation. AI has been used to automate mundane tasks, improve medical diagnoses, and even create artwork. However, as AI continues to develop, the question of whether or not it has gone too far must be asked.

The potential of AI is undeniable. It has the capability to revolutionize the way we do things and improve our lives. AI can help us make decisions faster and more accurately, allowing us to be more productive. It can also help us automate mundane tasks, allowing us to focus on more important work. AI can also be used to improve medical diagnostics, helping to detect diseases earlier and provide more efficient treatments.

However, with great power comes great responsibility, and AI is no exception. AI has been used to create algorithms that can be biased and even used to manipulate people. AI has also been used to create autonomous weapons, which could potentially be used to carry out unethical acts. Additionally, AI has the potential to replace human jobs, leading to a rise in unemployment. For these reasons, it is important to consider the implications of AI before its use. We must ensure that AI is used for the betterment of humanity and not to cause harm. We must also ensure that AI is not used to replace human jobs and that it is used in a responsible and ethical manner.

In conclusion, AI has the potential to revolutionize the way we do things and improve our lives. However, it is important to consider the implications of its use before it is employed.

[APPRENTICE]

I'm seeking improvement in hardware design, electrical engineering, signals processing, control systems and neuroscience (each somewhat sporadically but they're all connected). I learn quickly but I'm at a loss for tangible projects, so if you have any suggestions for any of those topics and/or are willing to commit some time to mentoring me in completing them, please share!

[MENTOR]

  • Linux
    • I can consult on a high level if you want to get into the whole Linux/vim/Tor/lineageOS/privacy/ everything is FOSS situation with software recs (best choice fo
... (read more)

Do you know how to get out of docs mode? (nvm - got it - thanks!)

1Ulisse Mini
You're probably in LessWrong docs mode; either switch to markdown or press ctrl+k to insert a link around selected text

Not sure if that was a continuation of the satire, but the question interested me so in case anyone else wants to know, here's an article on the origins of that myth: https://www.animal-dynamics.com/the-bumblebee-flight-myth/

I was making reference to the opening lines of Bee Movie, which were an old meme.

I'd love to see more than one data point for each career eventually. How are you scouting out professionals to interview? The idea is overall great - much easier than cold emailing people in professions that interest you and hoping someone's willing to share their experiences/advice

1koratkar
So far it's just been my family members and their friends. I'm going to continue interviewing that pool until I exhaust it (which should take a long time), though I'm not sure what I'll do after that.
2Shmi
Yeah, constructing a memory that feels real is not hard. 

I'd argue that working earlier and having fun are not necessarily mutually exclusive - for example, look at university life. There are a lot of students doing research and other work, while participating in probably the strongest self-discovery of their lives. I also don't think specialization has a significant impact on what forms of self-discovery someone can engage in - software engineering covers a broad variety of things, from working with people to problem solving to time management to creativity and pitching your work

I meant that school generally tries to embed deference to authority. It fades in the real world for certain jobs though.

Why should we cut our young years short?

  1. Brain myelination and information processing speed are highest then. Time is ticking if you want it to be easy to do creative, innovative work quickly. It is, of course, very possible to be successful as an adult with lower levels of neuroplasticity and processing and more "crystallized" intelligence, however adolescents have that particular advantage, differentiating them and making them valuable i
... (read more)
2ajc586
I think it speeds up self-discovery, at the expense of narrowing the domain within which that self-discovery takes place. So if you spend a lot of time as a teenager developing software, you certainly learn more about yourself in terms of your aptitude for developing software. But there's an opportunity cost.

I favour unguided self-discovery (a.k.a. "having fun") for longer, because I view self-discovery during teenage years as a global optimization, for which algorithms like simulated annealing tend to find better optima with a higher temperature, albeit taking longer to do so. As a result, I do not favour cutting short 'childhood' so people can be 'useful' sooner.

Also, it may well be that the LessWrong demographic favours intellectual stimulation as "better" than many other things, but for the general population, I don't see evidence this is the case. I know plenty of highly satisfied people, not driven by intellectual stimulation but nonetheless doing things most would regard as valuable to society. But yes, this comes down to subjective philosophy on what is "better" in terms of one's own utility function, and what we should be optimising for.
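The simulated-annealing analogy above can be made concrete with a toy sketch (the objective function and parameter values here are illustrative assumptions, not anything from the comment): a higher starting temperature means more uphill moves are accepted early, i.e. broader exploration before the search settles into a basin.

```python
import math
import random

def anneal(f, x0, temp, cooling=0.95, steps=2000, step_size=0.5, seed=0):
    """Minimize f by simulated annealing; higher starting temp = more exploration."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_fx = x, fx
    for _ in range(steps):
        cand = x + rng.uniform(-step_size, step_size)
        fc = f(cand)
        # Always accept downhill moves; accept uphill moves with prob exp(-delta/T).
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(temp, 1e-12)):
            x, fx = cand, fc
            if fx < best_fx:
                best_x, best_fx = x, fx
        temp *= cooling  # cool down: gradually reduce willingness to explore
    return best_x, best_fx

# Toy multimodal objective with its global minimum at x = 0:
f = lambda x: x * x + 3 * math.sin(3 * x) ** 2
best_x, best_f = anneal(f, x0=4.0, temp=5.0)
```

With `temp` near zero the loop degenerates into greedy local descent (it rarely accepts uphill moves, so it can stall in whatever basin it starts in), which is the "low temperature, faster but worse optima" half of the analogy.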

Can you think of any way to fix the system without forcing everyone into an apprenticeship? The status quo in America right now is respect for the system for most because it's easy and a clean path ... hands-on learning wouldn't appeal to all

0averros
The same argument applies to academic learning... it doesn't appeal to all. What we need is diversity in education - scrap the system of academic accreditation and the prohibitions on "child labor", so that more academically inclined kids may go for academic courses while more hands-on oriented kids join companies and learn on the job (actually there is some of that in family-owned businesses even now; unfortunately estate taxes tend to destroy generational family business operations).

I haven't seen it, thanks for sharing - I think this post offers a much quicker, slightly less directly philosophical view of a different subset of points, but I haven't lurked here for long enough to know what is and isn't redundant

Do you know if this organization still exists or of anything like it? Closest I'm familiar with is the recurse center

1sab
It still operates, but I haven't kept in touch closely enough to know how well it's doing and what has changed – https://talent.edu.pl/. 

dynamic balancing of self-assertiveness vs. deference to authority

The proportion of "deference to authority" is too high, in my opinion.

Knowledge acquisition, on the other hand, can be done via Wikipedia etc. and does not need to occupy school time. People who want to acquire knowledge can do this easily in their bedroom at night. 

This isn't application-based knowledge. I mentioned that students can learn concepts on their own, but what society currently lacks is a path to do something useful with it from a younger age.

Also, I agree that learning soci... (read more)

1ajc586
  In school, or in the real world? And if the latter, what context in particular? In a career context, for example, lower deference to authority (when carefully executed) tends to lead to more rapid promotion, where at the terminus (CEO) everyone in an organisation defers to you. It doesn't seem there's a huge supply/demand imbalance for senior roles, which suggests to me that the self-assertiveness vs. deference balance in working-age society is more or less optimal. Agreed, but why should teenagers being 'useful' be a goal? A century ago, most teenagers did actually do useful things (work in factories etc.) but we've moved away from that these days. Being a teenager is fun, with low responsibility, a lot of free time for self-discovery, etc. We have a lifetime after that to be 'useful'. Why should we cut our young years short?

Do you know of any drawbacks to the apprenticeship system in Germany? I wonder why that isn't more common across the world.

5ChristianKl
In the 19th century, Germany had a three-class voting system and also a three-class school system. While we switched our voting system, we kept our three-class school system.

In the US you have the idea of the American dream where in principle anyone can reach the upper class (even when Americans hate to use the term upper class). If you believe that 12 years of schooling are a requirement for reaching the upper class and you want to get everyone to the upper class, it makes sense to give everyone 12 years of schooling.

In the US, class has a lot to do with race. The German class system is built in a way that assumes that important class differences are not about race, as it assumes most of the citizens are Germans. In the US, middle class often means something like not being Black, but there's the pretense that it doesn't. Americans who want to overcome racism then do things like letting universities have a quota for accepting a certain number of Black people to give them access to the middle class.

From a German perspective, it's very unclear why a plumber who's middle class should have a college degree. It does make sense if you actually want a plumber who isn't Black, since you can filter for that by requiring a college degree. If you are in Washington and don't want a Black person to look after your kids but don't want to admit that you don't want a Black person to look after your kids, requiring a college degree for that work makes a lot of sense.

It's worth saying that these days German culture isn't very strong and we are switching a lot to the Anglo-Saxon way of doing things.

The usefulness of university depends on the job. It's better for networking than anything.

And yes, I'm just calling some attention to the problem. I've considered a few solutions but nothing stands out as reasonably implementable within our Overton window

The problem is, as a business owner, how do I tell this person apart from the average 14 year old?

This is a limited and subjective answer but there are just some subtle conversational and lifestyle markers of potential (I've talked to a fair few "intelligent" people about this and they agree that you can just tell if someone is of their type). A more reasonable solution is to encourage cold emails along the lines of "hey, I'm taking initiative and pitching myself to you as a resource good for X, here's what I'm interested in, here's why there isn't much ap... (read more)

-13quanticle

School in concept is a great idea. Give the new generation a base of knowledge from which to build. It is just very very poorly implemented. So, I'd say the meetings and other maintenance/organizational devices common in the programming world fall into the same class: useful in theory, essentially useless in practice.

Alternative schools exist, and they output arguably more useful individuals, however they are chosen at the will of the parent. There remain many students stuck in typical public schools, and there should be something they can independently do... (read more)

3quanticle
I have yet to see any kind of "alternate school" actually do a better job of outputting "more useful individuals", however that's defined. I agree that such a school can exist, but given the current parlous state of research when it comes to education and teaching, I have yet to see any kind of firm evidence as to whether such schools do exist.
7James Camacho
I think this needs to be done for >18 year-olds as well. Most research positions require a PhD as a prerequisite, when there are many talented undergraduates who could drop out of college and perform the research after a few weeks' training.

University has value in its connections and the confirmation that a candidate has the requisite knowledge. Many large companies auto-reject anyone without an "upper education" for these reasons, as it's easier to apply that as a filter and miss a couple people than take the time to know everyone's unique situations.

The article also references the subgroup of "competent but apathetic" (which I would subjectively say is common, and the main missed population, as those with perseverance and unrelenting raw passion tend to do well on their own). A lot of peopl... (read more)

1Algon
Wait, I am a bit confused about where we stand. Can we establish as common knowledge that university is mostly about signalling and not about gaining knowledge that is useful in jobs? Do we also agree that apprenticeships/live experience are a better way to gain job-relevant skills? Do we also agree that we're stuck in an inadequate equilibrium right now, with strong forces keeping things messed up on both the supply and demand side for apprenticeships? And that the purpose of your article was to say "hey, there's a problem here. Think about it some more" rather than to present new insights?