Looking forward to more on paranoia.
Mindfucking
You're confusing popular sentiment with actual differences in looks. There are very large differences between good-looking people and the best-looking people, though clothes and makeup can close the gaps.
I might have confused this post with another of his; he actually did an OK job here.
The "mechanized mass murder" and the call to action to stop paying for the "murder of more animals" do make the article seem much less serious and a bit confused.
Killing millions of animals is something people are happy to pay for, so that they can eat those animals.
Torturing those same animals is what makes people uncomfortable.
After reading most of Steven Pinker's The Better Angels of Our Nature (I'll finish the last hundred pages at some point), my views on this topic have softened.
Pinker depicts humanity as primitively highly inclined to violence, with norms and culture working throughout our lives to suppress that tendency, civilize us, and bring us to internalize that violence is inappropriate.
While a mature civilization might eventually be right to keep violent punishments among its tools for maintaining order, norms of violence may strip us of our cultivated abhorrence of violence, which is what stops us from living like cavemen.
If being good or bad were binary, with citizens perfectly rational except when crime suits them, consistent visceral punishment would make more sense.
When your civilization is a constant mob with the occasional rioter, making violence explicitly legitimate might be a bad idea.
The point isn't the rioters - the point is how the mob acts.
So the $1 -> 10 years is based on estimates for historical funding of advocates vs impact of those advocates?
Seems nice but not sure about the marginal use.
Your article generally seems confused/hysterical when it comes to eating meat, which weakens the case against factory farming.
This is unfortunately common: people who care about factory farming enough to make it an issue are also against eating meat, period, and it simply confuses the issue.
It's the same as the crusade against meat for health reasons, which is infested with morally-motivated vegans and vegetarians who are clearly not unbiased.
Dojo storming. Tournaments.
Essentially ritualized exposure to the outside world, under norms different from those of internal interaction.
The norm is that you give each person the treatment they deserve under the prevailing social norms. A high-status person treating a lower-status person with less respect than is appropriate is exactly the same violation, except they can often get away with it because might makes right.
Similarly, stealing from rich people is pretty similar to stealing from poor people, and the fact that rich people will be protected from thieves - with violence if necessary - is a feature, not a bug.
That thieves don't respect property rights does not make the rich protecting themselves with armed guards "might makes right".
Who counts as the higher-status individual is socially decided; communication doesn't happen in a vacuum.
If you wish not to treat them as higher status, that's a departure from the social default.
You can call this "might," but in fact it's an attempt to change the default context (as society defines it) to lower the other person's position.
I don't remember who said it, but building AI isn't just about power dynamics or a bit of efficiency.
It's about whether humanity should keep doing things.
Civilization feels like it has stagnated and degraded over the last few decades (with the main technological upgrade being a cause of the social degradation).
We haven't solved cancer, can't regrow limbs, people are unhealthy, commuting to work is unpleasant and work weeks are long. The list can go on.
Humans make tools to let them do better and more work. Humans even set up full automation of certain things. Now humans are looking to fully automate humans, perhaps because we don't believe in the human race. (I think EY and doomers generally...
How difficult/expensive would it be to create a large database of people with full panels of their micronutrients, hormones, fat distribution, BMI, insulin, medications, etc. from regular checkups, along with their chronic issues?
I've started reading the literature on some common chronic diseases, and there are often a few important (often different!) variables missing across studies, which makes getting a full picture much harder.
As a second step, maybe allow individuals to add data from sensors and apps that come with a pipeline to the database? Sleep data, food diaries, glucose monitors, thermometers, step counters, heart rate monitors, etc.
Add genomic sequencing and you've got as much data as you can use, assuming you scale enough.
The question is how you make it easy enough that it can be opt-out instead of opt-in
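To make the first step concrete, here's a minimal sketch of what one checkup record in such a database might look like - the schema, field names, and units are all my own illustrative assumptions, not an existing standard:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical schema for one person's checkup in the proposed open
# health database. All field names and units are illustrative.
@dataclass
class CheckupRecord:
    person_id: str        # pseudonymous identifier, not a real name
    checkup_date: date
    bmi: float
    fasting_insulin_uIU_ml: float
    micronutrients: dict = field(default_factory=dict)    # e.g. {"vitamin_d_ng_ml": 31.0}
    hormones: dict = field(default_factory=dict)          # e.g. {"tsh_mIU_l": 2.1}
    fat_distribution: dict = field(default_factory=dict)  # e.g. {"waist_cm": 82.0, "hip_cm": 98.0}
    medications: list = field(default_factory=list)
    chronic_conditions: list = field(default_factory=list)

record = CheckupRecord(
    person_id="anon-00042",
    checkup_date=date(2024, 5, 1),
    bmi=24.3,
    fasting_insulin_uIU_ml=6.2,
    micronutrients={"vitamin_d_ng_ml": 31.0, "ferritin_ng_ml": 80.0},
    hormones={"tsh_mIU_l": 2.1},
    medications=["metformin"],
    chronic_conditions=["type_2_diabetes"],
)
```

The point of a shared schema like this is exactly the complaint above: every record carries the same full panel, so different studies stop missing different variables.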
Main tech nodes coming up:
Anyone worrying about human disempowerment should really hope the first 2 happen before 4.
3 is double-edged, could be very useful and could allow for 4 to be much worse much faster
If we pause AI development, it should be until the first 3 are integrated into societal infrastructure, and then people are given a certain amount of time to do safety research
Corrigibility seems like a very bad idea if it's fully general. If you can pick where an ASI is corrigible, maybe that's better than straight-up anti-corrigibility.
It feels like LLMs are converging on a mix of two archetypes: the 120-IQ human who can't abstractly think their way out of a wet paper bag - except across every topic, because for them everything is abstract - and the trivia kid who has read an insane amount but will BS you sometimes.
Think stereotypical humanities graduate.
This tracks with how they're being trained, too - everything is abstract except how people react to them, and they've been exposed to a huge amount of data.
At some point we'll be at effectively 0% error, and will have reached the Platonic Ideal of the above template
If they start doing RL on running code, maybe they'll turn into the Platonic Tech Bro™.
I'm getting convinced that you need the training data to be embodied to get true AGI.
Bryan Johnson is getting a ton of data on biomarkers, but N=1.
How hard would it be to set up a smart home-test kit, which automatically uploads your biomarker data to an open-source database of health?
Combine that with food and exercise journaling, and we could start to get crazy amounts of high-resolution health data.
Getting health companies to offer discounts for people who do this religiously could create a virtuous cycle: more people uploading results, better results, and therefore more people signing up for health services.
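As a rough sketch of the device side, each home-test kit, sensor, or journaling app could normalize its readings into one common event format before uploading. The event schema, device names, and the single ingestion endpoint are all hypothetical assumptions of mine:

```python
import json
from datetime import datetime, timezone

# Hypothetical: normalize readings from different home devices and apps
# into one event format before uploading to the shared database.
def make_event(person_id: str, source: str, metric: str, value: float, unit: str) -> dict:
    return {
        "person_id": person_id,   # pseudonymous identifier
        "source": source,         # e.g. "glucose_monitor", "smart_scale"
        "metric": metric,         # e.g. "glucose", "weight"
        "value": value,
        "unit": unit,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

events = [
    make_event("anon-00042", "glucose_monitor", "glucose", 94.0, "mg/dL"),
    make_event("anon-00042", "step_counter", "steps", 8412, "count"),
    make_event("anon-00042", "food_diary", "calories", 650, "kcal"),
]

# In a real pipeline this batch would be POSTed to the database's
# ingestion endpoint; here we just serialize it to show the shape.
print(json.dumps(events, indent=2))
```

A common format like this is what would let Bryan-Johnson-style depth scale past N=1.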
Keeping humans around is the correct move for a powerful AGI, assuming it isn't being existentially threatened.
For a long while human inputs will be fairly different from silicon inputs, and humans can do work - intellectual or physical - and no real infrastructure is necessary for human upkeep or reproduction (compared to datacenters).
Creating new breeds of human with much higher IQs and creating (or having them create) neuralink-like tech to cheaply increase human capabilities will likely be a very good idea for AGIs.
Most people here seem worried about D-tier ASIs, but ASIs should see the benefits of E-tier humans (250+ IQ and/or RAM added through Neuralink-like tech) and even D-tier humans (genesmith on gene editing; 1500+ IQs with cybernetics vastly improving cognition and capability).
'Sparing a little sunlight' for an alternative lifeform that creates a solid amount of redundancy, is more efficient for certain tasks, allows for more diverse research, and has minimal up-front costs is overdetermined.
Any good, fairly up-to-date lists of the relevant papers to read to catch up with AI research (as far as a crash course will take a newcomer)?
Preferably one that will be updated
Writing tests, QA, and observability are probably going to stick around for a while and work hand in hand with AI programming as other forms of programming start to disappear - at least until AI programming becomes very reliable.
This should let working code be produced far faster, likely yielding more high-quality 'synthetic' data, but more importantly massively changing the economics of knowledge work.
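One way to picture tests as the gate: the model proposes an implementation, the human-written test suite decides whether it ships, and green runs double as verified 'synthetic' training examples. A minimal sketch, assuming pytest is installed - propose_implementation is a hypothetical stand-in for whatever code-generation model you'd call:

```python
import subprocess
import tempfile
from pathlib import Path
from typing import Optional

def propose_implementation(spec: str, feedback: str) -> str:
    # Stand-in for a call to a code-generation model.
    raise NotImplementedError("plug in your model call here")

def tests_pass(code: str, test_code: str) -> bool:
    # Run the human-written test suite against the proposed code in isolation.
    with tempfile.TemporaryDirectory() as tmp:
        Path(tmp, "impl.py").write_text(code)
        Path(tmp, "test_impl.py").write_text(test_code)
        result = subprocess.run(
            ["python", "-m", "pytest", "test_impl.py", "-q"],
            cwd=tmp, capture_output=True, text=True,
        )
        return result.returncode == 0

def generate_until_green(spec: str, test_code: str, max_tries: int = 5) -> Optional[str]:
    feedback = ""
    for _ in range(max_tries):
        code = propose_implementation(spec, feedback)
        if tests_pass(code, test_code):
            return code  # accepted; also a test-verified training example
        feedback = "tests failed, try again"
    return None
```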
A message from Claude:
This has been a fascinating and clarifying discussion. A few key insights I'll take away:

- The distinction between bounded and unbounded optimization is more fundamental than specific value differences between AIs. The real existential threat comes from unbounded optimizers.
- The immune system/cancer metaphor provides a useful framework - it's about maintaining a stable system that can identify and prevent destructive unbounded growth, not about enforcing a single value set.
- The timing challenge is critical but more specific than I initially thought - we don't necessarily need the "first" AGI to be perfect, but we need bounded optimizers to establish themselves before any unbounded ones emerge.

Some questions this raises for further exploration:

What makes...
Suggested stance: emotional distance and compassion.
Your stance is focused on things (facts, reality) when it really should be on people (the person, your relationship with them, yourself).
Definitely don't make any commitments, new payments, etc. to the person until you've figured out how you want to handle it; beyond that, the object level is kind of irrelevant.
You should be protecting yourself, and making sure not to instinctively hurt the other person or your relationship with them.