I think he’s talking about cost disease?
https://en.m.wikipedia.org/wiki/Baumol_effect
Are there any plans for an update? One year on, do the ideas discussed still apply?
I also started doing something similar. I’ve thought about rolling over every 6 months in case a black swan flash-crashes the value of the options at the time of exercising/selling. Any thoughts on this?
Has LeCun explained anywhere how he intends to keep the guardrails on open-source systems?
I modified part of my portfolio to resemble the summarized takeaway. I'm up 30%(!?!) in less than 4 months.
Could a basic version of this, something that could help many people with their reasoning, easily be set up as a GPT?
I tried it:
https://chat.openai.com/g/g-x4ryeyyCd-rationalist-dojo
But I'm still unhappy with what I'm getting. If you have a good prompt for finding inconsistencies in your reasoning, please share it!
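For reference, my current attempt looks roughly like this (a minimal sketch; the system prompt and the model name are just my own guesses, not anything official):

```python
# Minimal sketch of an "inconsistency finder" call to the OpenAI chat API.
# The system prompt and model choice here are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a rationalist sparring partner. Given a piece of reasoning, "
    "list every internal contradiction, unstated assumption, and place "
    "where the stated confidence outruns the stated evidence. "
    "Quote the offending sentences verbatim before critiquing them."
)

def find_inconsistencies(reasoning: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # assumption: any capable chat model works here
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": reasoning},
        ],
    )
    return response.choices[0].message.content

print(find_inconsistencies(
    "I never trust studies, but this one study proves I'm right."
))
```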
I had been visiting every day since 2018 and finding one or two interesting articles to read on all kinds of topics.
For the past few months I’ve just been reading Zvi’s stuff and any AI-related articles that aren’t too technical.
Some Reddit forums have days dedicated to specific topics. I don’t know if having AI stuff only a few days a week would help restore the balance, haha.
I asked ChatGPT to explain the image, and it pulled a Westworld "Doesn't look like anything to me" reply on the "language model hallucinations are lies" box:
This image is a play on the concept of "alignment charts," which are often used in role-playing games to define character behavior. Here, it classifies different types of lies based on two axes: content (what is being said) and structure (the nature of the lie).
1. **Content purist** vs. **Content rebel**: This axis distinguishes lies based on whether the statement itself is denotatively false (purist) ...
Did you count calories? Did you try to keep the number of calories the same as in the replaced meals, but with potatoes?
There’s an app called Garden where you enter the names of the people you care about and how often you want to talk to them: once a week, once a month, etc.
I started using it and being open with people about it. A few mentioned it sounded a bit weird, but otherwise I’ve gotten overwhelmingly positive feedback, and I’m staying in touch regularly with the people I care about!
The “what I get/what they get from me” columns from this Dunbar exercise are a bit too much for me, though.
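The core idea is simple enough to sketch (this is my own toy reconstruction; I have no idea how the real app is built):

```python
# Toy reconstruction of a contact-cadence tracker like Garden:
# each person gets a desired contact frequency and a last-contact date.
# Names, cadences, and dates below are example data only.
from datetime import date, timedelta

contacts = {
    # name: (cadence in days, date of last contact)
    "Alice": (7, date(2024, 5, 1)),   # weekly
    "Bob": (30, date(2024, 4, 10)),   # monthly
}

def overdue(today: date) -> list[str]:
    """Return everyone whose next contact date has already passed."""
    return [
        name
        for name, (cadence, last) in contacts.items()
        if last + timedelta(days=cadence) <= today
    ]

print(overdue(date.today()))  # e.g. ['Alice', 'Bob']
```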
Got it. Seems to me that it only works in liquid markets, right? If the spread is significant, you pay much more than what you can sell it for and hence don’t get the 0.09 difference?
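To make that concrete with made-up numbers:

```python
# Toy illustration of how the bid-ask spread can eat a small edge.
# All quotes are hypothetical; the 0.09 edge is the figure from the
# thread above, measured at mid prices, not a real quote.
theoretical_edge = 0.09      # expected profit per share at mid prices

bid, ask = 99.90, 100.10     # hypothetical quotes on an illiquid name
spread_cost = ask - bid      # you buy at the ask, later sell at the bid

net = theoretical_edge - spread_cost
print(f"net per share: {net:+.2f}")  # -0.11: the spread ate the edge
```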
Would you have a link to a resource that would help me understand the 9% you mention in this comment? How does it work? Which shares would one have needed to buy to take advantage of this trade? Thanks
I have followed your advice for over a year now and have this note on my phone with a summary of the regimen.
Gym routine
Cardio: Very high intensity routines follow a pattern of a short warmup (5 minutes at a slow pace) followed by several bursts of 10-60 seconds of all-out intensity. (30 on, 30 off, for 10 intervals is popular and close to max...
Good point; I can briefly outline how the research on volume has informed how I lift these days.
It used to be believed that intensity was basically irreplaceable, but more and better studies have shown extremely similar effects from lower intensity, down to approximately 60-65% of your 1 rep max, whereas a 4 or 5 rep scheme is going to be around 80-85% of your 1 rep max. So I tend to work the listed exercises in the 8-12 rep range, which further reduces injury risk. The exercise choices are good, and I also add in an accessory or two, defaulting to face pulls ...
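To put those percentages into concrete numbers (the Epley formula below is one common 1RM estimator; using it here is my own illustration, not something from the studies):

```python
# Working-weight arithmetic for the rep ranges discussed above.
# Epley estimator: 1RM ~= weight * (1 + reps/30). A common rule of
# thumb, assumed here for illustration; example numbers are made up.
def estimated_1rm(weight: float, reps: int) -> float:
    return weight * (1 + reps / 30)

one_rm = estimated_1rm(80, 8)  # e.g. 80 kg for 8 reps -> ~101 kg 1RM

# 8-12 rep sets at roughly 60-65% of 1RM (the lower-intensity scheme)
print(f"8-12 rep working weight: {0.60 * one_rm:.0f}-{0.65 * one_rm:.0f} kg")
# 4-5 rep sets at roughly 80-85% of 1RM (the heavier scheme)
print(f"4-5 rep working weight:  {0.80 * one_rm:.0f}-{0.85 * one_rm:.0f} kg")
```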
I think it is a TED talk, just uploaded to the wrong channel.
I asked GPT-4 to develop an end-of-the-world story based on how EY thinks it will go. I fed it several quotes from EY, asked it to make the story exciting and compelling, and after a few tweaks, this is what it came up with. I should mention that the name of the system was GPT-4's idea! Thoughts?
Title: The Utopian Illusion
Dr. Kent and Dr. Yang stood before a captivated audience, eager to unveil their groundbreaking creation. "Ladies and gentlemen, distinguished members of the press," Dr. Kent began, "we are proud to introduce TUDKOWSKY: the Total Urban Det...
If I understood correctly, he mentions augmenting humans as a way out of the existential risk; at least, I understood that he has more faith in it than in making AI do our alignment homework. What does he mean by that? Increasing productivity? New drug development? Helping us gain insights into new technologies to develop? All of the above? I'd love to understand the ideas around that possible way out.
I have a very rich, smart developer friend who knows a lot of influential people in SV. First employee of a unicorn, he retired from work after a very successful IPO, and now he just looks for interesting startups to invest in. He had never heard of LessWrong when I mentioned it and is not familiar with AI research.
If anyone can point me to a way to present AGI safety to him, to maybe turn his interest toward investing his resources in the field, that would be helpful.
Booked a call!
Will do. Thanks!
Is there a way "regular" people can "help"? I'm a serial entrepreneur in my late 30s. I went through 80,000 Hours, and they told me they would not coach me, as my profile was not interesting. That was back in 2018, though.
I believe 80,000 Hours has a lot more coaching capacity now; it might be worth asking again!
If Eliezer is pretty much convinced we're doomed, what is he up to?
I'm not sure how literally to take this, given that it comes from an April Fools' Day post, but consider this excerpt from Q1 of MIRI announces new "Death With Dignity" strategy.
...That said, I fought hardest while it looked like we were in the more sloped region of the logistic success curve, when our survival probability seemed more around the 50% range; I borrowed against my future to do that, and burned myself out to some degree. That was a deliberate choice, which I don't regret now; it was worth trying, I would not have wanted to die having not tried,
You are correct Willa! I am probably the Pareto best in a couple of things. I have a pretty good life all things considered. This post is my attempt to take it further, and your perspective is appreciated.
I tried going to EA groups in person and felt uncomfortable, if only because everyone was half my age or less. Good thing the internet fixes this problem, hence me writing this post.
Will join the Discord servers and send you a PM! Will check out Guild of the Rose.
Opened a blog as well and will be trying to write, which, from what I've read a gazillion times, is the best way to improve your thinking.
Thanks for your message!
Sent you a PM!
Hello!
Been reading LessWrong for years but have never posted: I feel like my cognitive capacities are nowhere near the average in this forum.
I would love to exchange ideas and try to improve my rationality with less “advanced” people; I'm wondering if anyone has recommendations.
Been thinking that something like the changemyview subreddit might be a good start?
Thanks
Welcome!
"I feel like my cognitive capacities are nowhere near the average in this forum."
Why do you feel that? I like to push back against such framing of cognitive capacities, or capabilities generally, and instead frame those things as "where on the Pareto frontier for some combination of skills are my capabilities?" My view here is heavily influenced by johnswentworth's excellent post on the topic and by what I've read from the history of science, innovation, etc. (Jason Crawford's Progress Studies works are great; check them out.)
Besides my pushing back a...
Thank you for taking the time to reply. I had to read your comment multiple times, and I'm still not sure I got what you wanted to say. What I got from it:
a) Ideology is not the most efficient method to find out what the world is
b) Ideology is not the most efficient method to find out what the world ought to be
Correct?
You ask whether biased solutions are a good or a bad thing. I thought rationality generally identifies biases as bad; is this correct?
We should hence strive to live and act as ideology-free as possible. Correct?
My current job is to develop PoCs and iterate on user feedback. It's a lot of basic, boilerplate work. I'm handling three projects at the same time, whereas before Cursor I would have been managing one and taking longer. I suck at UI, and Cursor simply solved this for me. We have shipped one of the tools and are finalizing the second; they are indeed LLM wrappers, designed to summarize or analyze text for customer support purposes and GTR-related stuff. The UI iteration, however, has immensely helped and accelerated things.
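For flavor, the core of one of these wrappers has roughly this shape (a stripped-down sketch; the function name, model, and prompt are placeholders of mine, not our production code):

```python
# Stripped-down sketch of a "summarize a support ticket" LLM wrapper.
# Model name and prompt are placeholder assumptions; the real tools
# add UI, retries, and logging on top of something like this.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_ticket(ticket_text: str) -> str:
    """Return a short structured summary of a customer-support ticket."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any cheap chat model suffices
        messages=[
            {
                "role": "system",
                "content": "Summarize this support ticket in three bullet "
                           "points: issue, impact, requested action.",
            },
            {"role": "user", "content": ticket_text},
        ],
    )
    return response.choices[0].message.content

print(summarize_ticket("My March invoice was charged twice; please refund."))
```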