All of asr's Comments + Replies

asr20

It's a tempting thought. But I think it's hard to make the math work that way.

I have a lovely laptop here that I am going to give you. Suppose you assign some utility U to it. Now instead of giving you the laptop, I give you a lottery ticket or the like. With probability P I give you the laptop, and with probability 1 - P you get nothing. (The lottery drawing will happen immediately, so there's no time-preference aspect here.) What utility do you attach to the lottery ticket? The natural answer is P * U, and if you accept some reasonable assumptions about... (read more)
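For concreteness, a minimal sketch of the valuation described above (the function name and the numbers are mine, purely for illustration):

```python
# Expected utility of a lottery: win utility u_win with probability p,
# otherwise get u_nothing (here, nothing at all).
def lottery_utility(p, u_win, u_nothing=0.0):
    return p * u_win + (1 - p) * u_nothing

# If you assign the laptop utility U = 100 and I offer it with P = 0.3:
print(lottery_utility(0.3, 100))  # 30.0, i.e. P * U
```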

0Lumifer
If we are going into VNM utility, it is defined as the output of the utility function and the utility function is defined as returning real numbers.
0DataPacRat
I could try to rescue the idea by throwing in units, the way multiplying distance units by time units gives you speed units... but I'd just be trying to technobabble my way out of the corner. I think the most that I can try to rescue from this failed hunch is that some offbeat and unexpected part of mathematics might be able to be used to generate useful, non-obvious conclusions for utilitarian-style reasoning, in parallel with math based on gambling turning out to be useful for measuring confidence-strengths more generally. Anybody have any suggestions for such a subfield which won't make any actual mathematicians wince, should they read my story?
asr30

This is because the current position, direction, and speed of an atom (and all other measurements that can be done physically) are only possible with one and only one specific history of everything else in the universe.

This seems almost certainly false. You can measure those things to only finite precision -- there is a limit to the number of bits you can get out of such a measurement. Suppose you measure position and velocity to one part in a billion in each of three dimensions. That's only around 200 bits -- hardly enough to distinguish all possible universal histories.
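As a rough check on that arithmetic (the numbers below just restate the one-part-in-a-billion assumption):

```python
import math

bits_per_quantity = math.log2(1e9)   # one part in a billion is ~29.9 bits
quantities = 6                       # position and velocity, three dimensions each
print(round(quantities * bits_per_quantity))  # ~179 bits -- the "around 200" above
```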

1[anonymous]
I agree, I should have written a conditional: I will edit above. Other than our ability to measure these dimensions, I think that their current state is possible with only one history of the universe since the beginning.
asr30

Good point. A time limit of 3:54 does seem too arbitrary to be hard-coded.

Hrm. Maybe it's exactly one Atlantean time unit? Unsafe to assume that the units we are used to are the same units that the Stone's maker would find natural.

asr70

I bet Hermione is just going to love being the center of all the attention and scrutiny this will bring on her.

She came back from the dead. Gonna be a lot of attention and scrutiny regardless.

asr20

I have this impression - parenting hardly ever discussed on LW - that most of the community has no children.

Let me give you an alternate explanation. Being a parent is very time-consuming. It also tends to draw one's interest to different topics than are typically discussed here. In consequence, LW readers aren't a random sample of nerds or even of people in the general social orbit of the LW crowd. I would not draw any adverse inferences from the fact that a non-parenting-related internet forum tends to be depleted of parents.

0[anonymous]
This isn't adverse. I don't really care much what the actual reasons for other people's reproductive choices are; my point is simply that after having made them, and it is like 0 or 1 kids, it makes more sense to adapt to society than to change it. (BTW, not actually that time-consuming. Only if you believe the fashionable bullshit that throwing gigantic amounts of attention at kids is necessary for them to turn into successful and well-adjusted adults. In reality, if every second evening is like "shut up and read a book and leave dad in peace to write one", that is not actually harmful in any way. 50% of the outcome is genes and 50% is outside-the-family environment.)
asr00

This graph would be more interesting and persuasive with a better caption.

asr60

data scientists / statisticians mostly need access to computing power, which is fairly cheap these days.

This is true for each marginal data scientist. But there's a catch, which is that those folks need data. Collecting and promulgating that data, in the application domains we care about, can sometimes be very costly. You might want to consider some of those as part of the cost for the data science.

For example, many countries are spending a huge amount of money on electronic health records, in part to allow better data mining. The health records aren'... (read more)

asr20

Um, yes for most definitions of "rational". That's why [autism] is considered a disability.

Hrm? A disability is something that limits the disabled individual from a socially recognized set of normal actions. The term 'disability' alone doesn't imply anything about reasoning or cognitive skills. It seems at best un-obvious, and more likely false, that "rationality" encompasses all cognitive functions.

Some people have dyslexia; that is certainly a cognitive disability. It would be strange (not to say offensive) to describe dyslexic i... (read more)

2alienist
Autism isn't just a behavior; it affects one's ability to reason socially, which in turn is "fed back" into other mental processes, causing downstream problems, e.g., an inability to form models of social situations.
asr20

One of the unfortunate limitations of modern complexity theory is that problems that look isomorphic sometimes have very different complexity properties. Another awkwardness is that worst-case complexity isn't a reliable guide to practical difficulty. "This sorta feels like a coloring problem" isn't enough to show it's intractable on the sort of instances we care about.

Separately, it's not actually clear to me whether complexity is good or bad news. If you think that predicting human desires and motivations is infeasible computationally,... (read more)

1JoshuaZ
The point I believe that 27chaos is trying to argue isn't that writing down the constraints would necessarily be hard (although it very likely is) but that trying to satisfy them may be tough.
asr10

I just observe that a lot of cosmology seems to be riding on the theory that the red shift is caused by an expanding universe.

This seems wrong to me. There are at least two independent lines of evidence for the Big Bang theory besides redshifts -- isotope abundances (particularly for light elements) and the cosmic background radiation.

What if light just loses energy as it travels, so that the frequency shifts lower?

We would have to abandon our belief in energy conservation. And we would then wonder why energy seems to be conserved exactly in every ... (read more)

asr20

Speaking as a former algorithms-and-complexity TA --

Proving something is in NP is usually trivial, but probably would be worth a point or two. The people taking complexity at a top-tier school have generally mastered the art of partial credit and know to write down anything plausibly relevant that occurs to them.

solipsist110

I think roystgnr's comment was meant to be parsed as:

"Hmm... I can prove that this is in NP, and I can prove it is not in P and is not in NP-Complete. But that's not worth any points at all!" (crumples up and throws away paper)

Corollary:

...they shouldn't have crumpled that piece of paper.

asr50

What if light just loses energy as it travels, so that the frequency shifts lower? That seems like a perfectly natural solution. How do we know it isn't true?

As gjm mentions, the general name for this sort of theory is "tired light." And these theories have been studied extensively and they are broken.

We have a very accurate, very well-tested theory that describes the way photons behave, quantum electrodynamics. It predicts that photons in the vacuum have a constant frequency and don't suddenly vanish. Nor do photons have any sort of inter... (read more)

asr50

"Falling in love" isn't this sudden thing that just happens, it's a process and it's a process that is assisted if the other person is encouraging and feels likewise. Put another way, when the object of your affection is uninterested, that's often a turnoff, and so one then looks elsewhere.

0Capla
I'm probably not a typical case (which is probably why I am confused), but this has not been my experience. I don't know if I've ever been "in love", given that "in love" is so sloppily and subjectively defined. I'm not clear on what's a crush and what's "in love." But suffice it to say that I have felt feelings for someone that someone who is less introspective (or philosophically careful) would likely label as being in love. When I am in such a state, the feelings of the other towards me are fairly irrelevant to my feelings towards her. I want the best for the object of my love and I sort of "melt" inside when I see her smile, but I don't want her to love me back, necessarily. I want her to be happy. It's not as if I'm not fishing for a relationship, and if that one isn't biting, I'll go find someone else to fall in love with. However, I have worked hard to develop myself emotionally, to forgive always, and to love unconditionally.
asr00

There is a peculiar consequence of this, pointed out by Cosma Shalizi. Suppose we have a deterministic physical system S, and we observe this system carefully over time. We are steadily gaining information about its microstates, and therefore by this definition, its entropy should be decreasing.

You might say, "the system isn't closed, because it is being observed." But consider the system "S plus the observer." Saying that entropy is nondecreasing over time seems to require that the observer is in doubt about its own microstates. What does that mean?

asr00

Russell is an entirely respectable and mainstream researcher, at one of the top CS departments. It's striking that he's now basically articulating something pretty close to the MIRI view. Can somebody comment on whether Russell has personally interacted with MIRI?

If MIRI's work played a role in convincing people like Russell, that seems like a major accomplishment and demonstration that they have arrived as part of the academic research community. If Russell came to that conclusion on his own, MIRI should still get a fair bit of praise for getting there f... (read more)

2Artaxerxes
The 3rd edition of Artificial Intelligence: A Modern Approach, which came out in 2009, explains the intelligence explosion concept, cites Yudkowsky's 2008 paper Artificial intelligence as a positive and negative factor in global risk, and specifically mentions friendly AI and the challenges involved in creating it. So Russell has more or less agreed with MIRI on a lot of the key issues for quite some time now.
0Kaj_Sotala
He has.
2Jayson_Virissimo
His textbook from 2009 mentions Yudkowsky and Omohundro by name, so he very likely is familiar with MIRI's arguments.
asr340

Did the survey. Mischief managed.

asr80

Did you read about Google's partnership with NASA and UCSD to build a quantum computer of 1000 qubits?

Technologically exciting, but ... imagine a world without encryption. As if all locks and keys on all houses, cars, banks, nuclear vaults, whatever, disappeared, only incomparably more consequential.

My understanding is that quantum computers are known to be able to break RSA and elliptic-curve-based public-key crypto systems. They are not known to be able to break arbitrary symmetric-key ciphers or hash functions. You can do a lot with symmetric-key sys... (read more)
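As a rough illustration of why symmetric-key systems hold up better (assuming, as is standard, that Grover's square-root speedup is the best known generic quantum attack on them, while Shor's algorithm breaks RSA and elliptic-curve systems outright):

```python
# Effective brute-force security of an n-bit symmetric key, classically vs. with
# Grover's algorithm (which searches 2^n keys in roughly 2^(n/2) steps).
def effective_bits(key_bits, quantum=False):
    return key_bits // 2 if quantum else key_bits

print(effective_bits(128, quantum=True))   # 64  -- uncomfortably small
print(effective_bits(256, quantum=True))   # 128 -- still considered plenty
```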

4NxGenSentience
Asr, thanks for pointing out the wiki article, which I had not seen. I actually feel a tiny bit relieved, but I still think there are a lot of very serious forks in the road that we should explore. If we do not pre-engineer a soft landing, this is the first existential catastrophe that we should be working to avoid. A world that suddenly loses encryption (or even faith in encryption!) would be roughly equivalent to a world without electricity. I also worry about the legacy problem... all the critical documents in RSA, PGP, etc., sitting on hard drives, servers, CD-ROMs, that suddenly are visible to anyone with access to the tech. How do we go about re-coding all those "eyes only" critical docs into a post-quantum coding system (assuming one is shown practical and reliable), without those documents being "looked at" or opportunistically copied in their limbo state between old and new encrypted status? Who can we trust to do all this conversion, even given that the new algorithms are developed? This is actually almost intractably messy, at first glance.
asr50

Taking up on the "level above mine" comments -- Scott is a very talented and successful researcher. He also has tenure and can work on what he likes. The fact that he considers this sort of philosophical investigation worth his time and attention makes me upwardly revise my impression of how worthwhile the topic is.

asr50

Points 1 and 2 are reasonably clear. Point 3 is unhelpfully vague. If I were moderator, I would have no idea how far that pushes, and as a commenter I wouldn't have a lot of insight as to what to avoid.

I don't mind giving a catch-all authority to a moderator, but if there are specific things you have in mind that are to be avoided, it's probably better to enumerate them.

I would add an explicit "nothing illegal, nothing personally threatening" clause. Those haven't been problems, but it seems better to remind people and to make clear we all agree on that as a standard.

asr00

Interesting. Can you say more about how your work compares to existing VMs, such as the JVM, and what sorts of things you want to prove about executions?

3[anonymous]
Most commercially used VMs are not designed to the level of security required to protect humanity from a UFAI. VM escape bugs are routinely found in various JVM implementations, and the complexity and structure of the VM itself precludes existing or theorized analysis tools from being able to verify the implementation to a level of detail sufficient for high-assurance systems. To be useful in this context, a VM needs to be designed with simplicity and security as the two driving requirements. One would expect, for example, an interpreter for such a VM to occupy no more than 1,000 lines of MISRA C, which could have its security assertions proven with existing tools like Coq. The goal is to get in the core VM the simplest set of opcodes and state transitions which still allow a nearly Turing-complete (total functional, to be specific) program description language, and one which is still capable of concisely representing useful programs. The other way in which it differs from many VM layers is that it has a non-optional strong typing system (in the spirit of Haskell, not C++). Type checking, mostly. Actually that aspect of the system has less to do with boxing than the design of the artificial intelligence itself. You can imagine, for example, a core evolutionary search algorithm which operated over program space by performing type-safe mutation of program elements, or achieved creative combination by substituting type-equal expressions. It is also important to prove some properties, e.g. bounds on running time. Particularly when interacting with other untrusted agents ("sharing minds" -- machine-to-machine communication is likely to be literally uploading memories from one mind-space to another).
asr00

Doing an audit to catch all vulnerabilities is monstrously hard. But finding some vulnerabilities is a perfectly straightforward technical problem.

It happens routinely that people develop new and improved vulnerability detectors that can quickly find vulnerabilities in existing codebases. I would be unsurprised if better optimization engines in general lead to better vulnerability detectors.

asr20

Having a top-level domain doesn't make an entity a country. Lots of indisputably non-countries have top-level domains. Nobody thinks the Bailiwick of Guernsey is a country, and yet .gg exists.

2A1987dM
A couple days ago I did see an article somewhere calling Jersey a country, though.
3Eugine_Nier
Well, it's sufficiently independent of the UK to function as a tax haven. It's definitely one of those entities that's on the fuzzy boundary between country and non-country, along with Hong Kong and (in a slightly different way) Dubai.
asr00

To do that it's going to need a decent sense of probability and expected utility. Problem is, OpenCog (and SOAR, too, when I saw it) is still based in a fundamentally certainty-based way of looking at AI tasks, rather than one focused on probability and optimization.

I don't see why this follows. It might be that mildly smart random search, plus a theorem prover with a fixed timeout, plus a benchmark, delivers a steady stream of useful optimizations. The probabilistic reasoning and utility calculation might be implicit in the design of the "self-imp... (read more)

asr00

But it would have a very hard time strengthening its core logic, as Rice's Theorem would interfere: proving that certain improvements are improvements (or, even, that the optimized program performs the same task as the original source code) would be impossible.

This seems like the wrong conclusion to draw. Rice's theorem (and other undecidability results) implies that there exist optimizations that are safe but cannot be proven to be safe. It doesn't follow that most optimizations are hard to prove safe. One imagines that software could do what humans do -- hu... (read more)

0[anonymous]
To do that it's going to need a decent sense of probability and expected utility. Problem is, OpenCog (and SOAR, too, when I saw it) is still based in a fundamentally certainty-based way of looking at AI tasks, rather than one focused on probability and optimization.
asr-10

You might look into all the work that's been done with functional MRI analysis of the brain -- your post reminds me of that. The general technique of "watch the brain and see which regions have activity correlated with various mental states" is well known -- well enough known that all sorts of limitations and statistical difficulties have been pointed out (see Wikipedia for citations).

-6[anonymous]
asr40

In other words, even if this is completely correct, it doesn't disprove relativity. Rather, it disproves either relativity or most versions of utilitarianism--pick one.

It seems like all it shows is that we ought to keep our utility functions Lorentz-invariant. Or, more generally, when we talk about consequentialist ethics, we should only consider consequences that don't depend on aspects of the observer that we consider irrelevant.

asr50

I'm curious if anyone has made substantial effort to reach a 'flow' state in tasks outside of coding, like reading or doing math etc etc., and what they learned. Are there easy tricks? Is it possible? Is flow just a buzzword that doesn't really mean anything?

I find reading is just about the easiest activity to get into that state with. I routinely get so absorbed in a book that I forget to move. And I think that's the experience of most readers. It's a little harder with programming actually, since there are all these pauses while I wait for things to compile or run, and all these times when I have to switch to a web browser to look something up. With reading, you can just keep turning pages.

asr110

The canonical example is that of a child who wants to steal a cookie. That child gets its morality mainly from its parents. The child strongly suspects that if it asks, all parents will indeed confirm that stealing cookies is wrong. So it decides not to ask, and happily steals the cookie.

I find this example confusing. I think what it shows is that children (humans?) aren't very moral. The reason the child steals instead of asking has nothing to do with the child's subjective moral uncertainty -- it's that the penalty for stealing-before-asking is low... (read more)

asr40

Without talking about utility functions, we can't talk about expected utility maximization, so we can't define what it means to be ideally rational in the instrumental sense

I like this explanation of why utility-maximization matters for Eliezer's overarching argument. I hadn't noticed that before.

But it seems like utility functions are an unnecessarily strong assumption here. If I understand right, expected utility maximization and related theorems imply that if you have a complete preference over outcomes, and have probabilities that tell you how dec... (read more)

asr70

I appreciate you writing this way -- speaking for myself, I'm perfectly happy with a short opening claim and then the subtleties and evidence emerges in the following comments. A dialogue can be a better way to illuminate a topic than a long comprehensive essay.

asr130

High frequency stock trading.

3FiftyTwo
What happens if multiple agents have this ability? Does the impact of future knowledge cancel out, or do we get some sort of weird hyper-fast feedback loop?
4Peter Wildeford
See Primer (2004 film).
asr130

The attack that people are worrying about involves control of a majority of mining power, not control of a majority of mining output. So the seized bitcoins are irrelevant. The way the attack works is that the attacker would generate a forged chain of bitcoin blocks showing nonsense transactions or randomly dropping transactions that already happened. Because they control a majority of mining power, this forged chain would be the longest chain, and therefore a correct bitcoin implementation would try to follow it, with bad effects. This in turn would break the existing bitcoin network.

The government almost certainly has enough compute power to mount this attack if they want.
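A toy sketch of the chain-selection rule that makes a hash-power majority matter (my own simplified model, not real Bitcoin code): nodes adopt whichever chain embodies the most cumulative proof-of-work, so whoever can produce work faster than everyone else combined gets to decide which history wins.

```python
# Each block records the proof-of-work behind it; nodes follow the chain
# with the greatest cumulative work.
def preferred_chain(chains):
    return max(chains, key=lambda chain: sum(block["work"] for block in chain))

honest_chain   = [{"work": 1}] * 10
attacker_chain = [{"work": 1}] * 12   # built faster with >50% of total hash power
print(preferred_chain([honest_chain, attacker_chain]) is attacker_chain)  # True
```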

6jimrandomh
51% of hash power only grants the power to roll back recent transactions which you sent. It does not make it possible to enter invalid transactions, to roll back transactions you weren't party to, or to steal coins at rest. The risk is that you could receive coins, do something in response to receiving those coins, and then discover that they were clawed back. But the further back in time the transaction was, the more computationally expensive it is for them to do this.
0James_Miller
For how long would you have to control the computing power? Would having control of a massive number of computers for a few minutes be enough?
asr200

I didn't down-vote, but was tempted to. The original post seemed content-free. It felt like an attempt to start a dispute about definitions and not a very interesting one.

It had an additional flaw, which is that it presented its idea in isolation, without any context on what the author was thinking, or what sort of response the author wanted. It didn't feel like it raised a question or answered a question, and so it doesn't really contribute to any discussion.

-2AndyWood
Your brain was content free. You were projecting.
asr00

The only reasons I can think of are your #1 and #2. But I think both are perfectly good reasons to vote...

asr40

Think about the continuum between what we have now and the free market (where you can control exactly where your money goes), and it becomes fairly clear that the only points which have a good reason to be used are the two extreme ends. If you advocate a point in the middle, you'll have a hard time justifying the choice of that particular point, as opposed to one further up or down.

I don't follow your argument here. We have some function that maps from "levels of individual control" to happiness outcomes. We want to find the maximum of this fu... (read more)

0ygert
Hmm. To me it seemed intuitively clear that the function would be monotonic. In retrospect, this monotonicity assumption may have been unjustified. I'll have to think more about what sort of curve this function follows.
asr50

Eliezer thinks the phrase 'worst case analysis' should refer to the 'omega' case.

"Worst case analysis" is a standard term of art in computer science, that shows up as early as second-semester programming, and Eliezer will be better understood if he uses the standard term in the standard way.

A computer scientist would not describe the "omega" case as random -- if the input is correlated with the random-number source in a way that is detectable by the algorithm, it is by definition not random.
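To make the terminology concrete, here is a toy experiment (my own example, not from the thread): quicksort with a random pivot has an O(n^2) worst case if you quantify over the algorithm's own coin flips, but its expected comparison count is O(n log n) for every input, even a classically "worst-case" sorted one.

```python
import random

def quicksort_comparisons(xs):
    """Count (roughly) the comparisons quicksort with a random pivot performs."""
    if len(xs) <= 1:
        return 0
    pivot = random.choice(xs)
    less = [x for x in xs if x < pivot]
    more = [x for x in xs if x > pivot]
    return len(xs) - 1 + quicksort_comparisons(less) + quicksort_comparisons(more)

xs = list(range(200))  # already sorted: the classic bad case for a fixed pivot
avg = sum(quicksort_comparisons(xs) for _ in range(100)) / 100
print(avg)  # around 2 * 200 * ln(200) ~ 2100, nowhere near 200**2 / 2 = 20000
```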

2Eliezer Yudkowsky
I did not propose that worst case be interpreted as Omega or that it be given any nonstandard referent. I did suggest that "worst case" to describe the Adversary scenario is deceptive to readers, and we should ReplaceTheSymbolWithTheSubstance via a more descriptive phrase like "adversarial superintelligence that knows everything except the bits designated random". This is what the phrase standardly means in computer science, but calling this "worst case analysis" seems to me deceptive, especially if we're trying to conduct a meta-ish debate about the benefits of randomization, rather than talking about some particular algorithm.
5redlizard
Actually, in the context of randomized algorithms, I've always seen the term "worst case running time" refer to Oscar's case 6, and "worst-case expected running time" -- often somewhat misleadingly simplified to "expected running time" -- refer to Oscar's case 2. A system that reliably behaves like the omega case is clearly not random. However, a random system such as case 2 may still occasionally behave like omega, with probability epsilon, and it is not at all unreasonable or uncommon to require your algorithm to work efficiently even in those rare cases. Thus, one might optimize a random system by modelling it as an omega system, and demanding that it works well enough even in that context.
-2Vaniver
Agreed. Right. But I want to repeat the objection here that we often use pseudorandomness instead of actual randomness, and then the real worst case is that we've gotten a cursed seed. Somewhat less practically, in situations where a real adversary may have access to our hardware, we may have to assume that they can read (or write to!) our RNG.
asr20

Yes. Perhaps we might say, this is what middle school or high school science should be.

Likewise, direct demonstrations are the sort of thing I wish science museums focused on more. Often they have 75% of it, but the story of "this experiment shows X" gets lost in the "whoa, cool". I'm in favor of neat stuff, but I wish they explained better what insight the viewer should have.

asr00

Juries have a lot of "professional supervision." In the Common Law system, the judge restricts who can serve on the jury, determines the relevant law, tells the jury what specific question of fact they are deciding, controls the evidence shown to the jury, does the sentencing, and more. My impression is that the non-Common Law systems that use juries give them even less discretion. So when we have citizen-volunteers, we get good results only by very carefully hemming them in with professionals.

You can't supervise the executive in the same way. B... (read more)

asr30

I found this post hard to follow. It would be more intelligible if you gave a clearer explanation of what problem you are trying to solve. Why exactly is it bad to have the same people look for problems and fix them? Why is it bad to have a legislature that can revise and amend statutes during the voting process?

I also don't really understand what sort of comment or feedback you are expecting here. Do you want us to discuss whether this lottery-and-many-committees structure is in general a good idea? Do you want us to critique the details of your ... (read more)

asr80

I basically agree, but I think the point is stronger if framed differently:

Some defects in an argument are decisive, and others are minor. In casual arguments, people who nitpick are often unclear, both to themselves and to others, about whether their objections concern minor correctable details or seriously undermine the claim in question.

My impression is that mathematicians, philosophers, and scientists are conscious of this distinction and routinely say things like "the paper is a little sloppy in stating the conclusions that were proved, but this can be f... (read more)

asr20

The idea that you can reasonably protect your anonymity by using a nickname is naive.

I think not so naive as all that. The effectiveness of a security measure depends on the threat. If your worry is "employers searching for my name or email address" then a pseudonym works fine. If your worry is "law enforcement checking whether a particular forum post was written by a particular suspect," then it's not so good. And if your worry is "they are wiretapping me or will search my computer", then the pseudonym is totally unhelpful... (read more)

2ChristianKl
Given the way the NSA works, I would highly doubt that they don't check information in their databases when handing out a security clearance and run highly sophisticated computer analyses. The actual capabilities of those programs are going to be classified. The NSA doesn't want people to know about the capabilities they have. In addition, the internet doesn't forget. NSA computer programs might not be good enough at present to catch it, but they might be in five years. The whole Snowden episode in particular encouraged the NSA to invest a lot more effort into gathering data about possible leakers and to have computer programs that analyse the behavior of people with a security clearance.
asr00

This is incredibly cool and it makes me sad that I've never seen this experiment done in a science museum, physics instructional lab, or anywhere else.

2CronoDAS
My freshman physics professor showed us a video of a Cavendish experiment, but doing it live is problematic. You need a fairly controlled environment to pull it off because any air currents in the room will push the small weights around, and it also takes a long time for the weights to move.
asr10

This is actually a really good example of what I wanted.

I think I have a lot of reason to believe v = f lambda -- it follows pretty much from the definition of "wave" and "wavelength". And I think I can check the frequency of my microwave without any direct assumptions about the speed of light, using an oscilloscope or somesuch.
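A worked version of that check, with typical rather than measured numbers (the hot-spot spacing below is an assumed value; the standard kitchen demo measures it from melted spots in chocolate or marshmallows):

```python
f = 2.45e9                 # Hz, the nominal magnetron frequency printed on most ovens
hot_spot_spacing = 0.061   # m, assumed distance between melted spots (= lambda / 2)
wavelength = 2 * hot_spot_spacing
print(f * wavelength)      # ~2.99e8 m/s, close to the accepted speed of light
```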

asr00

But yes, you are correct, as long as your main criterion is something like "compelling at an emotional level", you should expect that different people understand it very differently.

This actually brings out something I had never thought about before. When I am reading or reviewing papers professionally, mostly the dispute between reviewers is about how interesting the topic is, not about whether the evidence is convincing. Likewise my impression about the history of physics is that mostly the professionals were in agreement about what would co... (read more)

0Douglas_Knight
Your observation of little conflict about whether the evidence is convincing could be explained by a consensus about whether it is convincing, but it could also be explained as low priority. That is my experience in math.
0Lumifer
That may be a peculiarity of physics (and math). Compare that to biology and medicine, not to mention social sciences. Well, of course, because "compelling at an emotional level" isn't really about evidence. Cute puppies are compelling at an emotional level. You're basically talking about getting a "proper" gut feeling, and that is very idiosyncratic.
asr00

Well, you could use your smartphone's accelerometer to verify the equations for centrifugal force, or its GPS to verify parts of special and general relativity, or the fact that its chip functions to verify parts of quantum mechanics.

These don't feel like they are quite comparable to each other. I do really trust the accelerometer to measure acceleration. If I take my phone on the merry-go-round and it says "1.2 G", I believe it. I trust my GPS to measure position. But I only take on faith that the GPS had to account for time dilation to work ... (read more)

0Lumifer
Otherwise known as the Typical Mind Fallacy :-) But yes, you are correct, as long as your main criterion is something like "compelling at an emotional level", you should expect that different people understand it very differently.
asr10

Another advantage of replicating the original discovery is that you don't accidentally use unverified equipment or discoveries (ie equipment dependent on laws that were unknown at the time).

I don't consider this an advantage. My goal is to find vivid and direct demonstrations of scientific truths, and so I am happy to use things that are commonplace today, like telephones, computers, cameras, or what-have-you.

That said, I certainly would be interested in hearing about cases where there's something easy to see today that used to be hard -- is there something you have in mind?

0christopherj
Well, you could use your smartphone's accelerometer to verify the equations for centrifugal force, or its GPS to verify parts of special and general relativity, or the fact that its chip functions to verify parts of quantum mechanics. But I'm not sure how you can legitimately claim to be verifying anything; if you don't trust those laws, how can you trust the phone? It would be like using a laser rangefinder to verify the speed of light. For this sort of thing, the fact that your equipment functions is better evidence that the people who made it know the laws of physics than any test you could do with it.
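For reference, the standard textbook figures behind the "GPS needs relativity" claim (published values, not derived here):

```python
# Satellite clocks run fast by ~45 microseconds/day (general relativity: weaker
# gravity at ~20,000 km altitude) and slow by ~7 microseconds/day (special
# relativity: orbital speed), for a net gain of ~38 microseconds/day.
net_clock_drift_per_day = 45e-6 - 7e-6     # seconds of clock error per day
c = 3.0e8                                  # m/s
print(net_clock_drift_per_day * c / 1000)  # ~11 km/day of ranging error if uncorrected
```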
asr40

Various ways to measure the speed of light. Many require few modern implements. How to measure constancy of the speed of light -- the original experiment, does not require any complicated or mysterious equipment, only careful design.

The early measurements of the speed of light don't require "modern implements." They do require quite sophisticated engineering or measurement. In particular, the astronomical measurements are not easy at all. Playing the "how would I prove X to myself" game brought home to me just how hard science is. Al... (read more)

asr30

Is there an easily visible consequence of special relativity that you can see without specialized equipment?

A working GPS receiver.

In general, things like a smartphone "verify" a great deal of modern science.

Yah. Though the immediacy of the verification will vary. When I use my cell phone, I really feel that information is being carried by radio waves that don't penetrate metal. But I never found the GPS example quite compelling; people assure me "oh yes we needed relativity to get it to work right" and of course I believe the... (read more)

asr20

Knowing X and being able to do something about X are quite different things. A death-row prisoner might be able to make the correct prediction that he will be hanged tomorrow, but that does not "enable goal-accomplishing actions" for him -- in the Bayes' world as well. Is the Cassandra's world defined by being powerless?

Powerlessness seems like a good way to conceptualize the Cassandra alternative. Perhaps power and well-being are largely random and the best-possible predictions only give you a marginal improvement over the baseline. Or else p... (read more)

3Lumifer
So then the Cassandra's world is essentially a predetermined world where fate rules and you can't change anything. None of your choices matter.