All of eternal_neophyte's Comments + Replies

it's not a very obvious example

I honestly regret that I didn't make it as clear as I possibly could the first time around, but expressing original, partially developed ideas is not the same thing as reciting facts about well-understood concepts that have been explained and re-explained many times. Flippancy is needlessly hostile.

there are some problems to which search is inapplicable, owing to the lack of a well-defined search space

If not wholly inapplicable, then not performant, yes. Though the problem isn't that the search-space is not def... (read more)

for more abstract domains, it's harder to define a criterion (or set of criteria) that we want our optimizer to satisfy

Yes.

But there's a significant difference between choosing an objective function and "defining your search space" (whatever that means), and the latter concept doesn't have much use as far as I can see.

If you don't know what it means, how do you know that it's significantly different from choosing an "objective function" and why do you feel comfortable in making a judgment about whether or ... (read more)

4dxu
Because words tend to mean things, and when you use the phrase "define a search space", the typical meaning of those words does not bring to mind the same concept as the phrase "choose an objective function". (And the concept it does bring to mind is not very useful, as I described in the grandparent comment.) Now, perhaps your contention is that these two phrases ought to bring to mind the same concept. I'd argue that this is unrealistic, but fine; it serves no purpose to argue whether I think you used the right phrase when you did, in fact, clarify what you meant later on:

All right, I'm happy to accept this as an example of defining (or "inducing") a search space, though I would maintain that it's not a very obvious example (and I think you would agree, considering that you prefixed it with "in a looser sense"). But then it's not at all obvious what your original objection to the article is! To quote your initial comment:

Taken at face value, this seems to be an argument that the original article overstates the importance of search-based techniques (and potentially other optimization techniques as well), because there are some problems to which search is inapplicable, owing to the lack of a well-defined search space. This is a meaningful objection to make, even though I happen to think it's untrue (for reasons described in the grandparent comment).

But if by "lack of a well-defined search space" you actually mean "the lack of a good objective function", then it's not clear to me where you think the article errs. Not having a good objective function for some domains certainly presents an obstacle, but this is not an issue with search-based optimization techniques; it's simply a consequence of the fact that you're dealing with an ill-posed problem. Since the article makes no claims about ill-posed problems, this does not seem like a salient objection.

So is brainf*ck, and like NNs, BF programs are simple in the sense of being trivial to enumerate and hence search through. Defining a search space for a complex domain is equivalent to defining a subspace of BF programs or NNs, which could and probably does have a highly convoluted, warped separating surface. In the context of deep learning, your ability to approximate that surface is limited by your ability to encode it as a loss function.
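The "trivial to enumerate" point is easy to make concrete - a Python sketch ( names are mine, purely illustrative ):

```python
from itertools import product

BF_OPS = "+-<>[].,"  # the eight brainf*ck instructions

def bf_programs(length):
    """Yield every brainf*ck string of a given length, in lexicographic order."""
    for symbols in product(BF_OPS, repeat=length):
        yield "".join(symbols)

# 8^2 = 64 candidate programs of length two; "searching" this space is just
# iteration (not all strings are valid programs - brackets may be unbalanced).
programs = list(bf_programs(2))
assert len(programs) == 64
```

Enumerating the space is the easy part; carving out the subspace of programs that do anything useful is the hard part, which is the point above.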

2dxu
The task of locating points in such subspaces is what optimization algorithms (including search algorithms) are meant to address. The goal isn't to "define" your search space in such a way that only useful solutions to the problem are included (if you could do that, you wouldn't have a problem in the first place!); the point is to have a search space general enough to encompass all possible solutions, and then converge on useful solutions using some kind of optimization.

EDIT: There is an analogue in machine learning to the kind of problem you seemed to be gesturing at when you mentioned "more complex domains"--namely, the problem of how to choose a good objective function to optimize. It's true that for more abstract domains, it's harder to define a criterion (or set of criteria) that we want our optimizer to satisfy, and this is (to a first approximation) a large part of the AI alignment problem. But there's a significant difference between choosing an objective function and "defining your search space" (whatever that means), and the latter concept doesn't have much use as far as I can see.

It only makes sense to talk about "search" in the context of a *search space*; and all extant search algorithms / learning methods involve searching through a comparatively simple space of structures, such as the space of weights on a deep neural network or the space of board-states in Go and Chess. Defining these spaces is pretty trivial. As we move on to attack more complex domains, such as abstract mathematics, or philosophy, or procedurally generated music or literature which stands comparison to the best products of human genius, the problem of even *defining* the search space in which you intend to leverage search-based techniques becomes massively involved.

4dxu
Since deep neural networks are known to be Turing-complete, I don't think it's appropriate to characterize them as a "comparatively simple" search space (unless of course you hold that "more complex domains" such as abstract mathematics, philosophy, music, literature, etc. are actually uncomputable).

The strength of the claim being made by Slashdot and the lack of any examination of ways in which it could be false by whoever wrote Slashdot's summary both invite skepticism.

I'm of the opinion that we are in base reality regardless, though. The reason for this is that the incentive for running a simulation is so that you can observe the behavior of the system being simulated. If you have some vertical stack of simulations all simulating intelligent agents in a virtual world, and most of these simulations are simulating basically the same thing, that... (read more)

I liked Nietzsche's framing of the question in terms of eternal recurrence better. Strangely, I would forgo eternal recurrence but would choose the second option in your scenario ( since if it turns out to be a mistake the cost will be limited ).

0WalterL
I dunno, it might well be infinite. If God makes your life happen again, then it presumably includes his appearance at the end. Ergo you make the same choice and so on.

The connection between neuroses and memories was something that made me think a lot. I've been trying to provoke myself into some kind of "transformation" for about 10 years, with some limited successes and a lot of failures for a want of insight. Information like this is really valuable so thank you for sharing your experience.

Given that world GDP growth continues for at least another century, 100%. :)

It is impossible for one to act on another's utility function (without first incorporating it into their own utility function).

This seems tautological and trivially so. Whatever utility function you act on becomes by virtue of that fact "your" utility function.

these laws are exactly the outside world

That is my view precisely. One way out is to assert that there is at least one mind responsible for providing the percepts available to other minds, and from its perspective nothing is unknown and it fills the function of the "outside world".

The panpsychism argument is probably the most compelling one among all of these. The problem with it is that if percepts are the basic substance of the universe, how come we have experiences that we cannot predict? It implies our future experiences are determined by something outside of our own minds.

0cousin_it
Or that our minds define a probability distribution over future experiences.
0turchin
One way to answer it is to turn to the solipsistic way - that is, there is no outside universe, but there are laws which convert one experience into the next one. I would not try to defend the point, as it has one clear weakness: it is not parsimonious, as it requires extremely complex laws to convert one experience into the next, and, moreover, these laws are exactly the outside world, after some normalisation.

I don't know a whole lot about physics or the other subjects he talks about. It just seems very well-argued to me.

These two facts are related.

2TheAncientGeek
That is the kind of snark that is entirely justified.

Those are a lot of links to sift through though - can you give an example of just one? :)

0Fivehundred
Many are given in the words themselves, so I don't see why you're asking. The laser between posts?

Let's assume all the arguments linked are in fact sound. The first obvious question is: does he offer anything that resembles a falsifiability condition? If not, then he doesn't present anything remarkable or particularly difficult to dispatch, since his is a scientific, material hypothesis.

0Fivehundred
Nearly every link provides falsifiable claims, although some are difficult to test.

A section of three-dimensional space can be modelled as a cubic grid with nodes where the edges intersect, up to some limited resolution for a cube of finite volume ( and I suppose the same holds true with more than three dimensions ). It sounds as if you're proposing this graph basically be flattened - you take a fully connected regular polygon of n^3 vertices, map the nodes in your cube to your polygon and then delete all edges in the connected polygon that don't correspond to an edge present in the cube.

I have further questions but they hinge on whether or not I've understood you correctly. Is the above so far a fair summary?
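If I've followed, the "flattening" can be sketched as follows ( a hypothetical helper, just to check my reading of your proposal ):

```python
from itertools import product

def cubic_grid_edges(n):
    """Edges of an n x n x n cubic grid: two nodes are adjacent iff they
    differ by exactly 1 along a single axis."""
    nodes = list(product(range(n), repeat=3))
    index = {node: i for i, node in enumerate(nodes)}  # node -> polygon vertex 0..n^3-1
    edges = set()
    for (x, y, z) in nodes:
        for dx, dy, dz in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
            neighbour = (x + dx, y + dy, z + dz)
            if neighbour in index:
                edges.add((index[(x, y, z)], index[neighbour]))
    return edges

# Of the n^3 * (n^3 - 1) / 2 edges in the fully connected polygon, keep only
# these; e.g. a 2x2x2 grid is a single cube, which has 12 edges.
assert len(cubic_grid_edges(2)) == 12
```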

Hate to have to say this but directly addressing a concern is social confirmation of a form that the concern deserves to be addressed, and thus that it's based in something real. Imagine a Scientologist offering to explain to you why Scientology isn't a cult.

Of the people I know of who are outright hostile to LW, it's mostly because of basilisks and polyamory and other things that make LW both an easy and a fun target for derision. And we can't exactly say that those things don't exist.

0Adam Zerner
I could see some people responding that way. But I could see others responding with, "oh, ok - that makes sense". Or maybe, "hm, I can't tell whether this is legit - let me look into it further". There are lots of citations and references in the LessWrong writings, so it's hard to argue with the fact that it's heavily based off of existing science.

Still, there is the risk of some people just responding with, "Jeez, this guy is getting defensive already. I'm skeptical. This LessWrong stuff is not for me." I see that directly addressing a concern can signal bad things and cause this reaction, but for whatever reason, my brain is producing a feeling that this sort of reaction will be the minority in this context (in other contexts, I could see the pattern being more harmful). I'm starting to feel less confident in that, though. I have to be careful not to Typical Mind here. I have an issue with Typical Minding too much, and know I need to look out for it.

The good thing is that user research could totally answer this question. Maybe that'd be a good activity for a meet-up group or something. Maybe I'll give it a go.

Thank you for being gracious about accepting the criticism.

0Adam Zerner
:)

While I feel I technically speaking ought to be applauding any effort to boost the tolerance of heterodox opinions in universities, my heart would not be in it. I think the issue is that many of the most vicious "political types" are the ones with the weakest knowledge about the history and provenance of their own ideas. How many ultra-feminists have ever so much as opened "The Feminine Mystique"? The Feminine Mystique is not even talked about or referenced in discussions on Feminism I've come across. How many "Marxists" ev... (read more)

"actually, X" is never a good way to sell anything. Scientists are quite prone to this kind of speech, which from their perspective is fully justified ( because they've exhaustively studied a certain topic ) - but what the average person hears is the "you don't know what you're talking about" half of the implication, which makes them deaf to the "I do know what I'm talking about" half. If you just place the fruits of rationality on display, anyone with a brain will be able to recognize them for what they are and they'll adjust t... (read more)

1Adam Zerner
My impression: a major issue is that other people get the idea that LessWrong comes from a few people preaching their ideas, when in reality, it's people who mostly preach the ideas that have been discovered by and are widely agreed upon by academic experts. Just saying, "it comes from academics" seems not to address this major issue directly enough. That said, I see what you mean about "actually, X" being a pattern that may lead people to instinctively argue the other way. So I see that there is a cost, but my impression is that the cost doesn't outweigh the benefit that comes with directly addressing a major concern that others have. For most audiences; there are certainly some less charitable audiences who need to be approached more gently. I'd consider my confidence in this to be moderate. Getting your data point has led me to shift downwards a bit.
0Lumifer
Behold LW! :-)

He's really, really smart.

This is the kind of phrasing that usually costs more to say than you can purchase with it. Anyone who is themselves really, really smart is going to raise hackles at this kind of talk; and is going to want strong evidence moreover ( and since a smart person would independently form the same judgement about Yudkowsky, if it is correct, you can safely just supply the evidence without the attached value judgment ).

Fiction authors have a fairly robust rule of thumb: show, don't tell. Especially don't tell me what judgement to for... (read more)

1Adam Zerner
Thanks for calling this out. I was imagining explaining it to a friend or family member who is at least somewhat charitable and trusting of my judgement. In that case, I expect them to not raise hackles, and I think it's useful to communicate that I think the authors are particularly smart. However, if this were something that were posted on Less Wrong's About page, for example, I could definitely see how this would turn newcomers away, and I agree with you. Self-promoting as "really, really smart" definitely does seem like something that turns people off and makes them skeptical.

It's not smoking-gun obvious to me that this second formulation is what the pre-Pauline Christians believed in. Jesus's divinity certainly wasn't settled even after Paul. Consider for example the Arian "heresy".

Paul isn't going to cut it

Paul might cut it if you're Thomas Jefferson: https://en.wikipedia.org/wiki/Jefferson_Bible "Paul was the first corrupter of the doctrines of Jesus."

0Lumifer
Not for the change of mind from "I completely don't care about humans" to "I'll make my only Son a human (to start with) and let other humans crucify him so that they could wash off the original sin".

God says "j/k, just kidding"

Either God, Jesus or St. Paul - that all depends entirely on which sect you ask.

0Lumifer
Got to be someone from the Holy Trinity -- Paul isn't going to cut it.

therefore God optimizes this world for Leviathan

?

My own reading of Job was not that god's goodness is undeniable; it's that god really needs nothing from us and is entirely indifferent to human beings choosing to damn themselves or not, in contradiction to "your God is a jealous God".

If you have sinned, what do you accomplish against him? And if your transgressions are multiplied, what do you do to him? If you are righteous, what do you give to him? Or what does he receive from your hand? Your wickedness concerns a man like yourself, and your righteousness a son of man.

This seems to me lik... (read more)

0Lumifer
And then God says "j/k, just kidding" and does the whole New Testament thing :-)
0Viliam
My reading of Job is that Leviathan is more awesome than humans, and Job is forced to admit this, therefore God optimizes this world for Leviathan instead of humans. It's not that humans are completely irrelevant; but they are merely a rounding error compared with Leviathan, the utility monster.

They usually don't have any way to leverage their models to increase the cost of not buying their product or service though; so such a situation is still missing at least one criterion.

There is a complication involved, since it's possible to increase the cost to others of not doing business with you in "fair" ways. E.g. the invention of the fax machine reduced effective demand for message boys to run between office buildings, hence increasing their cost and the operating costs of anyone who refused to buy a fax machine.

Though I don't believe any c... (read more)

5evand
Modern social networks and messaging networks would seem to be a strong counterexample. Any software with both network effects and intentional lock-in mechanisms, really. And honestly, calling such products a blend of extortion and trade seems intuitively about right.

To try to get at the extortion / trade distinction a bit better: Schelling gives us definitions of promises and threats, and also observes there are things that are a blend of the two. The blend is actually fairly common! I expect there's something analogous with extortion and trade: you can probably come up with pure examples of both, but in practice a lot of examples will be a blend. And a lot of the 'things we want to allow' will look like 'mostly trade with a dash of extortion' or 'mostly trade but both sides also seem to be doing some extortion'.

This is the first time I've heard of this dilemma (so this post is really just thinking aloud). It seems to me that trade usually doesn't require agents to engage in deep modeling of each other's behaviour. If I go down to the market place and offer the man at the stall £5 for a pair of shoes, and he declines and I walk away - the furthest thing from my mind is trying to step through my model of human behaviour to figure out how to persuade him to accept. I had a simple model - to wit that the £5 was sufficient incentive to effect the trade - and when that... (read more)

1Stuart_Armstrong
What of companies that spend millions analysing markets before setting their prices? That seems to involve deep modelling, yet is canonically seen as trade.

I may have misunderstood your argument Thomas. Are you saying that because it's possible to construct a paradox ( in this case Yablo's paradox ) using an infinitude, that the concept of infinity is itself paradoxical?

Couldn't you make a similar argument about finite systems such as, say:

A: B is false, B: A is false

Here are only two sentences. Is the number two therefore paradoxical? I apologize if it sounds like I'm trying to parody your argument - I really would like to learn that I've misunderstood it, and in what way.

0Thomas
I have to admit that, no. Perhaps something else is wrong here. It might be. Not very likely, but still possible. We see that finite lists don't suffer because of Yablo's paradox; infinite lists do. Still, it may be something else which really causes the A & NOT A situation. You can. And you have a contradictory little system here. Which is therefore useless. So you take another one, like the rules for the game of chess. Hoping that there is no paradox there. (But there is. You can't believe it, but there is a paradox inside the chess rules!) In practice, you can continue to play chess and use infinities. But that is at your own risk.

"you can't do infinite number of steps in a finite time"

Well, can you? If some finite period must elapse when a finite distance is covered, and an infinite distance is greater than any finite distance, then the period of time elapsed in crossing an infinite segment must be greater than the period that elapses for crossing any finite segment, and thus also infinite.

I suppose you can also assume that you can cross a finite segment without a finite period of time elapsing - but then what's to prevent any finite segment of arbitrary length being crossed instantaneously?

0Thomas
What seemed the infinite number of steps to Zeno (and pretty much to everybody else) may be only some finite number of Planck lengths to cross in some finite number of Planck time units. (In the fifth century A.D., Indian mathematicians reconciled the doubts of Zeno using infinite series, and those solutions officially still hold. If you want to keep the infinitely divisible space and time, you may do it their way.)
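The infinite-series resolution mentioned above can be sketched in a few lines ( an illustration, with unit total distance and speed assumed ):

```python
# Zeno's dichotomy as a geometric series: each leg takes half the time of the
# one before it, yet the total stays finite.
def total_time(first_leg=0.5, legs=60):
    """Partial sum of t/2 + t/4 + t/8 + ... for a journey of total time t = 1."""
    return sum(first_leg * (0.5 ** n) for n in range(legs))

# Infinitely many steps, but the sum converges: the limit is exactly 1.
assert abs(total_time() - 1.0) < 1e-12
```

So "an infinite number of steps" need not imply an infinite amount of elapsed time, provided the steps shrink fast enough.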
0Thomas
He proved that there is ALWAYS either at least one "A & ~A" (and therefore many), or an unprovable theorem exists - inside all those systems which contain the "standard calculus"! He didn't prove an actual "A & ~A", but that one always exists if there are no unprovable theorems in those "standard calculus systems".

societies of brain augments that are all working together

Even that this presupposition should hold is questionable. Mutual distrust and the associated risk might make cooperative development an exceptional scenario rather than the default one.

The key ingredient for a MAD situation as far as I can think is some technology with a high destructiveness potential distributed among multiple agents who cannot trust each other. To reduce my whole argument to its cartoon outlines: serious brain augmentation seems about as good an idea as handing everyone their own nuclear arsenal.

0whpearson
I think there is a whole long discussion about whether individual or small numbers of brain augments can somehow hope to outsmart whole societies of brain augments that are all working together to improve their augmentations. And also discussions around how much smarter pure AIs would be compared to normal augments.

The first two would suggest I'm a subject-matter expert

Why? Are the two or three most vocal critics of evolution also experts? Does the fact that newspapers quote Michio Kaku or Bill Nye on the dangers of global warming make them climatology experts?

The more equitably intelligence augmentation is spread, the more likely it is that there will be less consequence-free power over others.

That is not apparent to me though. It seems like it would lead to a MAD style situation where no agent is able to take any action that might be construed as malintent without being punished. Every agent would have to be suspicious of the motives of every other agent since advanced agents may do a very good job of hiding their own malintent, making any coordinated development very difficult. Some agents might reason that it is... (read more)

0whpearson
I think I am unsure what properties of future tech you think will lead to more MAD style situations than we have currently. Is it hard takeoff?

Privately manufactured bombs are common enough to be a problem - and there is a very plausible threat of life imprisonment ( or possibly execution ) for anyone who engages in such behaviour. That an augmented brain with the inclination to doing something analogous would be effectively punishable is open to doubt - they may well find ways of either evading the law or of raising the cost of any attempted punishment to a prohibitive level.

I'd say it's more useful to think of power in terms of things you can do with a reasonable chance of getting away with it... (read more)

0whpearson
The more equitably intelligence augmentation is spread, the more likely it is that there will be less consequence-free power over others. Intelligence augmentation would allow you to collect more data and be able to communicate with more people about the actions you see other people taking. There are worlds where IA is a lot easier than standalone AI; I think that is what Elon is optimizing for. He has publicly stated he wants to spread it around when it is created (probably why he is investing in OpenAI as well). This world feels more probable to me as well, currently. It conflicts somewhat with the need for secrecy in singleton AI scenarios.

even with increasing power

At the individual level? By what metric?

these do not seem the correct things for maths to be trying to tackle

Is that a result of mathematics or of philosophy? :P

0whpearson
Knowledge and ability to direct energy. There are a lot more people who could probably put together a half-decent fertilizer bomb nowadays, but we are not in a continual state of trying to assassinate leaders and overthrow governments.

For this tactic to be effectual it requires that a society of augmented human brains will converge on a pattern of aggregate behaviours that maximizes some idea of humanity's collective values, or at least doesn't optimize anything that is counter to such an idea. If the degree to which human values can vary between _un_augmented brains reflects some difference between them that would be infeasible to change, then it's not likely that a society of augmented minds would be any more coordinated in values than a society of unaugmented ones.

In one sense I do believ... (read more)

1whpearson
We have been gradually getting more peaceful, even with increasing power. So I think there is an argument that brain augmentation is like literacy and so could increase that trend. A lot depends on how hard a take off is possible.

I like maths. I like maths safely in the theoretical world, occasionally brought out to bear on select problems that have proven to be amenable to it. Also I've worked with computers enough to know that maths is not enough. They are imperfectly modeled physical systems. I really don't like maths trying to be in charge of everything in the world, dealing with knotty problems of philosophy. Questions like what is a human, what is life, what is a human's value; these do not seem the correct things for maths to be trying to tackle.

Most scientists are not extropian in any sense - so if they have been "prepping the party" it was not deliberate. Are you considering scientists and religious folk as disjoint sets?

Perhaps Elon doesn't believe we are I/O bound, but that he is I/O bound. ;]

There's a more serious problem which I've not seen most of the Neuralink-related articles talk about* - which is that layering intelligence augmentations around an overclocked baboon brain will probably actually increase the risk of a non-friendly takeoff.

  • haven't read the linked article through yet
2whpearson
I think most people interested in IA want to make it so that there will be a large number of humans using IA at once, taking off as a group and policing each other, so in aggregate things go okay. It would be madness to rely on one or a small set of humans to take off and rule us. So the question becomes whether this scenario is better or worse than having a single AI using a goal system based off a highly abstract theorising of overclocked baboon brains to control the future?

by Wikipedia

Well, by people who edit there and may be hostile to either rationalists, NRXers or both. Luckily most people I've talked to online will react with bafflement or bemusement if Wikipedia is cited as a source for anything - so people are in my experience pretty well inoculated against the appeal-to-authority trap that Wikipedia creates.

7Viliam
I am afraid that many people, for example journalists, are not. In my experience, they quote Wikipedia and each other without hesitation.

This is how citogenesis happens: First David Gerard writes something on RationalWiki. Then a random journalist finds it and writes about it in an article. Then another journalist finds it in RationalWiki and the article, and writes about it in another article. Then more journalists join. Then David Gerard makes sure all these articles are linked from the Wikipedia page as "reliable sources", and that all other non-essential information about LessWrong is removed. Then more journalists find it in Wikipedia and other articles, etc.

And then, I am afraid that even people who generally take Wikipedia with a grain of salt will go: "Come on, Wikipedia says X, RationalWiki says X, Newspaper1 says X, Newspaper2 says X, Newspaper3 says X ... Newspaper99 says X -- now either this is a huge world-wide conspiracy against Less Wrong, or Less Wrong really is an evil cult of neoreactionary basilisk worshippers... and I don't really believe in worldwide conspiracies against a website no one really cares about."

Unfortunately, PR works, and we have some dedicated anti-PR volunteers. Maybe just two of them, but at least one of them knows how to start an avalanche, and is working on this for years. (Yeah, some people should get a life. Unfortunately, this is not my decision to make.)

That's more of a function of the way you code up the running processes

Well not necessarily, depending on what kind of transforms you can apply to the source before feeding it to the interpreter, and the degree of fuss you're willing to put up with in terms of defining global functions with special names to handle resurrection of state and so on.

Python wasn't picked specifically because it's ideal for doing this kind of thing but just because it's easy for hacking prototypes together and useful for many things. At the risk of overstating my progress - ... (read more)

If I may take a stab at this: it's probably a combination of 1) Costs a lot 2) Benefit isn't expected for many decades 3) No guarantee that it would work

Anyone taking a heuristic approach to reasoning about whether to sign up for cryonics, rather than a probabilistic one ( which isn't irrational if you have no way to estimate the probabilities involved ), could therefore easily evaluate it as not worth doing.

0gilch
The religious might also see it as an attempt to cheat God, which rarely ends well in the mythology.

Edit: lighttable is also very close to what I would consider a good operating environment.

[This comment is no longer endorsed by its author]

For one thing I need to be able to run it on a server without x-windows on it; so I need to be able to change code on my own machine, have a script upload it to the remote server and update the running code without halting any running processes. I also need the input source code to be transformed so every variable assignment, function-call or generator call is wrapped in a logging function which can be switched on or off, and for the output of the logs to be viewable by something basically resembling an Excel spreadsheet, where rows and columns can be f... (read more)
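For what it's worth, a minimal sketch of the call-wrapping part using the stdlib `ast` module ( `_log` is a hypothetical hook name; a real version would also handle assignments and switchable filtering ):

```python
import ast

class CallLogger(ast.NodeTransformer):
    """Rewrite every call f(x) as _log(f(x)), so each result flows through a hook."""
    def visit_Call(self, node):
        self.generic_visit(node)  # wrap any nested calls first
        return ast.Call(func=ast.Name(id="_log", ctx=ast.Load()),
                        args=[node], keywords=[])

logged = []
def _log(value):
    logged.append(value)  # a real hook could be toggled, or stream to a log viewer
    return value

tree = ast.fix_missing_locations(CallLogger().visit(ast.parse("x = abs(-3)")))
exec(compile(tree, "<rewritten>", "exec"), {"_log": _log})
assert logged == [3]  # the rewritten code routed abs(-3)'s result through _log
```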

0gilch
Lively Kernel is a Smalltalk-like environment that runs in the browser. It might be better for that server-side stuff than normal Smalltalk. Unfortunately, it's written in JavaScript, which is not a good language, but I think it can also compile ClojureScript, which is much better. Cloxp is a related project that's more Clojure-based. Amber Smalltalk also runs in-browser. The maintainer has kind of gone off in a weird direction, but it still works. PharoJS was supposed to be an alternative with a different approach, but I'm not sure if it was ever completed. Emacs is the closest Lisp has to a Smalltalk environment. There are emacsen written in other lisps, like EdWin and Hemlock. These can be run in a terminal over ssh.
0username2
Sounds like you need to look into erlang.
0Lumifer
That's more a function of the way you code up the running processes and less a function of the language involved, but I suspect that systems where a language can be combined with, essentially, a VM and an OS (LISP, Smalltalk) can make that a lot easier. Wrangling Python to become LISP is going to be... quite an exercise. But if all you need is to change the values of some variables and/or what some functions do, that looks doable. So basically you want to run your code inside a debugger?
0eternal_neophyte
Edit: lighttable is also very close to what I would consider a good operating environment.

I'm focusing on something highly specific right now: a dirty, hack-riddled attempt at turning Python into a usable live-programming environment. This is a far cry from my general interest in building an OS ( or "operating environment" ) which effectively has a reflective understanding of its own internals, programming language and operations.

This splits into many sub-problems; so the one I'm primarily fascinated by is how to construct a programming language that a computer not only can execute but understands the semantics of in various dimension... (read more)

0Lumifer
Better than ipython/Jupyter in which way? Something like a Smalltalk environment?

Can we post even if what we care about is secret? :)

I care about finding ways to turn a desktop computer into a better auxiliary organ of the brain.

1whpearson
I am also interested in that. I'm interested if you are trying to solve a single problem or whether you are trying to do something more general.

No problem, good luck with all you do.

allowing users to immediately save data onto their own devices

Aye. Not that I'd recommend doing it that way but I was basically just curious to see if JS could manage it.

dynamically change what the user sees

If you store information about the schedule they've set up in a cookie then yes - but I imagine it would be a lot of info for a cookie. If you intend to let users create or edit a schedule, close the tab and then come back to it later, you'll probably want to implement that using backend server stuff ( ses... (read more)

0[anonymous]
Great, thank you for all the help!

Ah, I confused myself because I thought you were referring to the neo-right French Identitarian youth movement: https://www.generation-identitaire.com/

If you have a decent grasp of python then https://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-i-hello-world is a very good resource.

This is the book that got me started with python: http://www.diveintopython3.net/

If you end up going down the Python road and your project grows to the point where you feel you would like help, I'd be very interested in contributing to projects of this kind.

tailored to this mini-project

Possibly this: http://exploreflask.com/en/latest/static.html

Though I've done a bit of googling and it's apparent that you can... (read more)

0[anonymous]
Thanks for the links! I checked out the SO link and a few of the things it linked to (like FileSaverJS) because that seemed most applicable / implementable. It looks like the benefit here is allowing users to immediately save data onto their own devices [is that correct?] From there, I'm guessing there would be ways to dynamically change what the user sees depending on what's been downloaded (so we could revisit the schedule even if the tab closes)?

a lot of the tools of criticism of institutions and ideologies are largely the same between the two

Do you have a specific example of that?

1tristanm
From the linked article: I mean when the author points out that many of the mainstream left's thought leaders draw directly from Foucault, she isn't wrong. That's just noticing something that's easily verifiable. I just think she's missing a facet of how non-academic people have taken it and run with it, which draws on a larger philosophical heritage.