All of wachichornia's Comments + Replies

Answer by wachichornia113

My current job is to develop PoCs and iterate over user feedback. A lot of it is basic, boilerplate work. I am handling three projects at the same time, when before Cursor I would have been managing one and taking longer. I suck at UI and Cursor simply solved this for me. We have shipped one of the tools and are finalizing the second, but they are indeed LLM wrappers, designed to summarize or analyze text for customer support purposes and GTR-related stuff. Cursor has, however, immensely helped and accelerated the UI iteration.

I think he’s talking about cost disease?

https://en.m.wikipedia.org/wiki/Baumol_effect

Are there any plans for an update? One year on, do the ideas discussed still apply?

I also started doing something similar. I've thought about rolling over every 6 months in case a black-swan flash crash tanks the value of the options at the time of exercising/selling. Any thoughts on this?

4Zach Stein-Perlman
If bid-ask spreads are large, consider doing so less often + holding calls that expire at different times so that every time you roll you're only rolling half of your calls.
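To make the staggering concrete, here is a toy sketch of the idea; it is not from the comment, and the tranche sizes, dates, and 45-day roll window are made-up assumptions:

```python
# Toy sketch of holding calls in two tranches with offset expiries, so that each
# roll only touches half the position (illustrative only; numbers are made up).
from datetime import date

position = [
    {"tranche": "A", "contracts": 5, "expiry": date(2025, 6, 20)},
    {"tranche": "B", "contracts": 5, "expiry": date(2025, 12, 19)},
]

def tranches_to_roll(today, position, window_days=45):
    """Return the tranches whose expiry falls within `window_days` of today."""
    return [p for p in position if (p["expiry"] - today).days <= window_days]

# In mid-May only tranche A is near expiry, so only half the calls get rolled;
# tranche B waits for the next cycle, spreading the bid-ask cost over time.
print(tranches_to_roll(date(2025, 5, 15), position))
```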

Has LeCun explained anywhere how he intends to keep the guardrails on open-source systems?

I modified part of my portfolio to resemble the summarized takeaway. I'm up 30(!?!)% in less than 4 months.

Could a basic version of this that could help many people with their reasoning easily be set up as a GPT?

I tried it:

https://chat.openai.com/g/g-x4ryeyyCd-rationalist-dojo

But still unhappy with what I am getting. If you have a good prompt to find inconsistencies in your reasoning, please share it!

3lsusr
I tried that too. It didn't work on my first ~1 hour attempt.
Answer by wachichornia91

I had been visiting every day since 2018, finding one or two interesting articles to read on all kinds of topics.

For the past few months I've just been reading Zvi's stuff and any not-too-technical AI-related articles.

Some Reddit forums have dedicated days to topics. I don’t know if having AI stuff only a few days a week would help restore the balance haha.

5Raemon
Note you can use the tag-filters to filter out AI or otherwise adjust the topics in your Latest feed.
2Bird Concept
Yeah, that reminds me of this thread https://www.lesswrong.com/posts/P32AuYu9MqM2ejKKY/so-geez-there-s-a-lot-of-ai-content-these-days

I asked ChatGPT to explain the image, and it pulled a Westworld "Doesn't look like anything to me" reply on the "language model hallucinations are lies" box:

This image is a play on the concept of "alignment charts," which are often used in role-playing games to define character behavior. Here, it classifies different types of lies based on two axes: content (what is being said) and structure (the nature of the lie).

1. **Content purist** vs. **Content rebel**: This axis distinguishes lies based on whether the statement itself is denotatively false (purist) ... (read more)

Did you count calories? Did you try to keep the same number of calories as the replaced meals, but with potatoes?

2CuoreDiVetro
Good question. As Portia says, I didn't. The whole point of this is to not use willpower, so restricting calories when you feel like eating goes against that. I didn't measure, but I'm willing to bet that how it works is that this diet makes me eat fewer calories without actively trying to eat fewer calories. What I tracked was only things that were "easy" to track, for example how many meals (light, medium, heavy), how many "snacks", etc. Super imprecise measurements; what was really surprising in the end is, despite that, how high an R^2 I could get on my linear model for next-day (or next-few-days) weight prediction. Will talk about this more in detail hopefully in a future post.
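To illustrate the kind of model being described, here is a minimal sketch; it is not the author's actual analysis, and the meal counts and weights below are invented purely for illustration:

```python
# Minimal sketch of a next-day weight prediction from rough meal/snack counts
# (hypothetical data; not the author's actual log or model).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# One row per day: counts of light meals, medium meals, heavy meals, snacks.
X = np.array([
    [2, 1, 0, 1],
    [1, 1, 1, 3],
    [3, 0, 0, 0],
    [1, 2, 1, 2],
    [2, 1, 1, 1],
    [0, 2, 2, 4],
])
next_day_weight = np.array([79.8, 80.3, 79.5, 80.1, 80.0, 80.7])  # kg, made up

model = LinearRegression().fit(X, next_day_weight)
r2 = r2_score(next_day_weight, model.predict(X))
# With this few days and this many predictors the in-sample R^2 is optimistic;
# a real check would hold out later days and predict them.
print(f"In-sample R^2: {r2:.2f}")
```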
3Portia
That would only be meaningful if OP had accurately weighed and tracked the food, which is enough of a hassle that this would have been mentioned, I think. And without it... you would naturally assume that OP consumed fewer calories, because a significant part of their diet was now a highly satiating low calorie food with resistant starch. That would definitely be my guess.

There’s an app called Garden where you enter the names of the people you care about and how often you want to talk to them: once a week, once a month, etc.

I started using it and being open to people about it. A few mentioned it sounded a bit weird but otherwise I’ve gotten overwhelmingly positive feedback and I’m staying in touch regularly with the people I care about!

The “what I get/what they get from me” columns from this Dunbar exercise are a bit too much for me though.

1Benjamin R
Link? Edit: this seems to be it: https://apps.apple.com/us/app/garden-stay-in-touch/id1230466454
1[comment deleted]
2rjacobs
Same, I’ve been using https://clay.earth for this with good results. Biggest things I’ve noticed:
  • Taking quick notes after interactions with “I can get” and “I can provide” helps when trying to remember things before your next conversation.
  • Clay will automatically update location and remind you about people without you needing to set reminders manually, which removes a lot of grunt work.
  • People see through periodic “it’s been a while!” texts sent on the first of each month. Thoughtful gifts and things like holiday cards with a handwritten note go a long way toward making things feel intentional.

Got it. Seems to me that it only works on liquid markets, right? If the spread is significant, you pay much more than what you can sell it for and hence do not get the $0.09 difference?

Would you have a link to a resource that would help me understand the 9% you mention in this comment? How does it work? What shares should have been bought in order to take advantage of this trade? Thanks

2NunoSempere
I don't have a link off the top of my head, but the trade would have been to sell one share of Yes for each market. You can do this by splitting $1 into a Yes and a No share, and selling the Yes. Specifically, on Polymarket you achieve this by adding and then withdrawing liquidity (for a specific type of market called an "AMM", for "automated market maker", which was the only kind Polymarket supported at the time, though it has since added an order book).

By doing this, you earn $1.09 from the sale + $3 from the three events eventually, and the whole thing costs $4, so it's a guaranteed profit. So I guess I was making a mistake when I said there was a 9% return in 1.5 months (it's $4.09/$4, or a 2.25% return over 1.5 months, which is much worse).
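For anyone who wants to sanity-check the numbers, here is a minimal sketch of the arithmetic described above; the dollar figures are the ones in the comment, and everything else is illustrative:

```python
# Reproducing the arbitrage arithmetic from the comment above (illustrative only).
cost = 4.00               # paid to split collateral into Yes/No shares
sale_proceeds = 1.09      # received up front from selling the Yes shares
resolution_payout = 3.00  # received when the three events eventually resolve

total_received = sale_proceeds + resolution_payout  # $4.09
profit = total_received - cost                      # $0.09, locked in at entry
period_return = total_received / cost - 1           # ~2.25% over ~1.5 months

print(f"profit: ${profit:.2f}, return over the period: {period_return:.2%}")
```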

I have followed your advice for over a year now, and have this note on my phone with a summary of the regime.

Gym routine

  • ~1-2 hour weightlifting sessions 2-3x a week. (A third weightlifting session is recommended for the first several months, for both gaining strength and building habits.)
  • ~15-40 minutes of vigorous cardio 2-3x a week.

Cardio: 

Very high intensity routines follow a pattern of a short warmup (5 minutes at a slow pace) followed by several bursts of 10-60 seconds all out intensity. (30 on 30 off for 10 intervals is popular and close to max... (read more)

Good point, I can briefly outline how the research on volume has informed how I lift these days.

It used to be believed that intensity was basically irreplaceable, but more and better studies have shown extremely similar effects from lower intensities, down to approximately 60-65% of your 1-rep max, whereas a 4- or 5-rep scheme is going to be around 80-85% of your 1-rep max. So I tend to work the listed exercises in the 8-12 rep range. This further reduces injury risk. The exercise choices are good, and I also add in an accessory or two, defaulting to face pulls ... (read more)
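To make the intensity figures concrete, here is a tiny sketch; the percentage ranges come from the comment, while the example 1-rep max is made up:

```python
# Translating %-of-1RM intensities into working weights (example 1RM is made up).
one_rep_max_kg = 100.0

schemes = {
    "4-5 reps (high intensity)": (0.80, 0.85),
    "8-12 reps (moderate intensity)": (0.60, 0.65),
}

for name, (low, high) in schemes.items():
    print(f"{name}: {one_rep_max_kg * low:.0f}-{one_rep_max_kg * high:.0f} kg")
```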

I think it is a TED talk, just uploaded to the wrong channel.

8bayesed
Yeah, based on EY's previous tweets regarding this, it seemed like it was supposed to be a TED talk.

I asked GPT-4 to develop an end-of-the-world story based on how EY thinks it will go. Fed it several quotes from EY, asked to make it exciting and compelling, and after a few tweaks, this is what it came up with. I should mention that the name of the system was GPT-4's idea! Thoughts? 
 

Title: The Utopian Illusion

Dr. Kent and Dr. Yang stood before a captivated audience, eager to unveil their groundbreaking creation. "Ladies and gentlemen, distinguished members of the press," Dr. Kent began, "we are proud to introduce TUDKOWSKY: the Total Urban Det... (read more)

If I understood correctly, he mentions augmenting humans as a way out of existential risk. At least, I understood that he has more faith in it than in making AI do our alignment homework. What does he mean by that? Increasing productivity? New drug development? Helping us get insights into new technology to develop? All of the above? I'd love to understand the ideas around that possible way out.

I have a very rich, smart developer friend who knows a lot of influential people in SV. The first employee of a unicorn, he retired from work after a very successful IPO and now just looks for interesting startups to invest in. He had never heard of LessWrong when I mentioned it and is not familiar with AI research.

If anyone can point me to a good way to present AGI safety to him, to maybe get him interested in investing his resources in the field, that would be helpful.

1Rachel Freedman
As an AI researcher, my favourite way to introduce other technical people to AI Alignment is Brian Christian’s book “The Alignment Problem” (particularly section 3). I like that it discusses specific pieces of work, with citations to the relevant papers, so that technical people can evaluate things for themselves as interested. It also doesn’t assume any prior AI safety familiarity from the reader (and brings you into it slowly, starting with mainstream bias concerns in modern-day AI).
1Yonatan Cale
My answer for myself is that I started practicing: I started talking to some friends about this, hoping to get better at presenting the topic (which is currently something I'm kind of afraid to do) (I also have other important goals, like getting an actual inside-view model of what's going on).

If you want something more generic, here's one idea: https://www.youtube.com/c/RobertMilesAI/featured
1Aditya
When I talk to my friends, I start with the alignment problem. I found this analogy to human evolution really drives home the point that it's a hard problem and that we aren't close to solving it: https://youtu.be/bJLcIBixGj8

At that point, questions come up about whether intelligence necessarily means morality, so I talk about the orthogonality thesis. Then, on why the AI would care about anything other than what it was explicitly told to do, the danger comes from instrumental convergence.

Finally, people tend to say we can never do it; they talk about spirituality and the uniqueness of human intelligence. So I need to talk about evolution hill-climbing to animal intelligence, how narrow AI has small models while we just need AGI to have a generalised world model, and how brains are just electrochemical complex systems. It's not magic. Talk about Pathways, Imagen, GPT-3 and what it can do, and talk about how scaling seems to be working: https://www.gwern.net/Scaling-hypothesis#why-does-pretraining-work

So it makes sense that we might have AGI in our lifetime, and we have tons of money and brains working on building AI capability, fewer on safety.

Try practising on other smart friends to develop your skill. You need to ensure people don't get bored, so you can't use too much time. Use nice analogies. Have answers to frequent questions ready.

Is there a way "regular" people can "help"? I'm a serial entrepreneur in my late 30s. I went through 80000 hours and they told me they would not coach me as my profile was not interesting. This was back in 2018 though.

1Yonatan Cale
Easy answers: you are probably overqualified (which is great!) for all sorts of important roles in EA; for example, you could help the CEA or LessWrong team, maybe as a manager? If your domain is around software, I invite you to talk to me directly. But if you're interested in AI direct work, 80k and AI Safety Support will probably have better ideas than me.
1plex
We should talk! I have a bunch of alignment-related projects on the go, and at least two that I'd like to start are somewhat bottlenecked on entrepreneurs; plus, some of the ones currently in motion might be assistable. Also, sad to hear that 80k is discouraging people in this reference class. (Seconding talking to AI Safety Support and the other suggestions.)
6Chris_Leong
You may want to consider booking a call with AI Safety Support. I also recommend applying for the next iteration of the AGI safety fundamentals course or more generally just improving your knowledge of the issue even if you don't know what you're going to do yet.
4Adam Jermyn
Just brainstorming a few ways to contribute, assuming "regular" means "non-technical":
  • Can you work at a non-technical role at an org that works in this space?
  • Can you identify a gap in the existing orgs which would benefit from someone (e.g. you) founding a new org?
  • Can you identify a need that AI safety researchers have, then start a company to fill that need? Bonus points if this doesn't accelerate capabilities research.
  • Can you work on AI governance? My expectation is that coordination to avoid developing AGI is going to be really hard, but not impossible.

More generally, if you really want to go this route I'd suggest trying to form an inside view of (1) the AI safety space and (2) a theory for how you can make positive change in that space. On the other hand, it is totally fine to work on other things. I'm not sure I would endorse moving from a job that's a great personal fit to something that's a much worse fit in AI safety.

I believe 80000 hours has a lot more coaching capacity now; it might be worth asking again!

If Eliezer is pretty much convinced we're doomed, what is he up to?

1Yonatan Cale
I think he's burned out and took a break to write a story (but I don't remember where this belief came from. Maybe I'm wrong? Maybe from here?)

I'm not sure how literally to take this, given that it comes from an April Fools Day post, but consider this excerpt from Q1 of MIRI announces new "Death With Dignity" strategy.

That said, I fought hardest while it looked like we were in the more sloped region of the logistic success curve, when our survival probability seemed more around the 50% range; I borrowed against my future to do that, and burned myself out to some degree. That was a deliberate choice, which I don't regret now; it was worth trying, I would not have wanted to die having not tried,

... (read more)

You are correct Willa! I am probably the Pareto best in a couple of things. I have a pretty good life all things considered. This post is my attempt to take it further, and your perspective is appreciated.

I tried going to EA groups in person and felt uncomfortable, if only because everyone was half my age or less. Good thing the internet fixes this problem, hence me writing this post.

Will join the discord servers and send you a pm! Will check out Guild of the Rose.

Opened a blog as well and will be trying to write, which, from what I've read a gazillion times, is the best way to improve your thinking.

Merci for your message!

Bonjour !

Been reading lesswrong for years but never posted: I feel like my cognitive capacities are nowhere near the average in this forum.

I would love to exchange ideas and try to improve my rationality with less “advanced” people; I'm wondering if anyone has recommendations.

Been thinking that something like the changemyview subreddit might be a good start?

Thanks

Willa100

Bienvenue!

"I feel like my cognitive capacities are nowhere near the average in this forum."

Why do you feel that? I like to push back against such framing of cognitive capacities or capabilities generally, and instead frame those things as "where on the pareto frontier for some combination of skills are my capabilities?" My view here is heavily influenced by johnswentworth's excellent post on the topic and what I've read from the history of science, innovation, etc. (Jason Crawford's Progress Studies works are great, check them out)

Besides my pushing back a... (read more)

9Flakito
Same challenge here. The average level of the contributions on LW seems very high to me too. I struggle to find the right fit for me, the correct difficulty setting, half-way between the average "easy" and LW "god mode", haha

Thank you for taking the time to reply. I had to read your comment multiple times, still not sure if I got what you wanted to say. What I got from it:

a) Ideology is not the most efficient method to find out what the world is

b) Ideology is not the most efficient method to find out what the world ought to be

Correct?

You ask if biased solutions are a good or a bad thing. I thought rationality generally identifies biases as bad things; is this correct?

We should hence strive to live and act as ideology-free as possible. Correct?

3Pattern
It depends on what you mean by ideology. I could have made this clearer by just asking that question and leaving it at that. I wrote my comment in a way that (1) presented "ideology" as meaning "dogma", but (2) also considered it a degenerate case of a theory. I don't think it's bad to have theories, but if the relationship between a theory and reality is handled so that 'reality' is always rejected/ignored when they disagree, then learning is impossible. Is it bad to have theories? No. Were biases identified as bad? Yes, though it's useful to distinguish between 'here is a heuristic people use' and 'here is where that heuristic goes wrong'.