From my “watched a YouTube video” understanding of Gödel’s incompleteness theorems, a (sufficiently powerful) consistent mathematical system cannot prove its own consistency, so any seemingly consistent system could always be hiding a fatal contradiction that invalidates the whole system, and the only way to know would be to actually find the contradiction.
So if at some point our current system of math gets proven inconsistent, what happens next? Can we tweak just the inconsistent part and have everything else still be valid or would we be forced to rebuild all of math from basic logic?
It’s been consistent enough so far, so most people probably won’t give a shit. It’ll be like Americans ignoring the metric system.
Our current system? So ZFC?
We move to a different, probably weaker system. Certain mathematicians studying hyper-infinite monstrosities like Banach spaces will be sad. Life will go on, though.
Existing applied math should all work purely in the hereditarily finite part of the von Neumann universe, i.e. with finite sets only. Saying there’s actually an infinite number of points in a line is a kind of luxury; it’s just one that feels right to most mathematicians. A number that’s simply far larger than anything worth bothering with can play the same role. In the same spirit, you can probably get rid of most of the von Neumann universe without breaking practical things.
If you mean actual practical math breaks, I dunno. In any reasonably complex inconsistent formal system, all statements can be proven true. Like 10 = 20. So, can I just grow more fingers somehow?
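To make the “everything becomes provable” point concrete, here’s a sketch in Lean 4 (assuming a reasonably recent toolchain where the `omega` tactic is available). The point is just that a contradictory hypothesis is enough to prove 10 = 20, or anything else:

```lean
-- "Principle of explosion": once a system proves a contradiction,
-- every statement follows from it, no matter how absurd.

-- If False is provable, any goal follows:
example (boom : False) : 10 = 20 :=
  False.elim boom

-- Same idea, starting from an inconsistent arithmetic hypothesis:
example (h : 0 = 1) : 10 = 20 := by
  omega  -- the hypothesis alone is contradictory, so the goal closes immediately
```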
Edit: Since this is Lemmy, it’s worth pointing out that Gödel’s incompleteness theorem is closely tied to the fact that some questions are undecidable by any Turing machine.
Gödel’s theorem has a kind of salacious, clickbait quality to it. Laymen will read all kinds of things into it. I’ve literally heard “so, math is useless then”. But if you know algorithms, you know that saying they can’t always determine whether a loop ends is nowhere close to saying they’re going to stop working, or that they aren’t really good at what they can do.
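If you haven’t seen that argument before, here’s a rough Python sketch of the diagonalization behind the halting problem. The `halts` oracle is hypothetical, which is exactly the point: no correct, general-purpose version of it can exist.

```python
# A sketch of the diagonal argument behind the halting problem.
# Suppose, for contradiction, we had a total, always-correct oracle:

def halts(program_source: str, input_data: str) -> bool:
    """Pretend oracle: True iff the given program halts on the given input."""
    raise NotImplementedError("no such general-purpose oracle can exist")

def diagonal(program_source: str) -> None:
    # Do the opposite of whatever the oracle predicts about a program run on itself.
    if halts(program_source, program_source):
        while True:      # oracle says "halts", so loop forever
            pass
    # oracle says "loops forever", so halt immediately

# Feeding diagonal its own source contradicts the oracle's answer either way,
# so an always-correct halts() cannot be written. That undecidability is the
# Turing-machine cousin of Godel's incompleteness result.
```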
I don’t see this happening except at the highest levels.

Numbers are just abstractions for quantity, and we can easily have visualizations for numbers and for addition and subtraction, which are just abstractions of adding to and removing from quantities. We use those visualizations a lot in early math classes. Even multiplication and division are easy to show, just a bit more complicated. Algebra is really just about moving the unknown around; all simple math could be written as “equals X, solve for X”. You add in more unknown placeholders and it gets more complicated, but I can’t see the consistency being questioned. Geometry relies on postulates, which is also what our abstraction of quantity was. Again, these are pretty easy to see visually and are really just sorta about defining certain properties: a point, a line, parallel, circles, angles. I think maybe the first thing that could possibly be upended around here is infinity.

As it gets more advanced, you’d need someone deeper into math to say, but calculus and trigonometry do a good job of predicting things, and that seems to hold as you go higher and higher. I think that is the big thing with consistency: if we can take the inputs when abstracting physical things and get an output that is consistent with what we see in the universe, well, then it’s consistent. The only thing here is we do have things that end up with unknowns that are sometimes then put in as constants, and that becomes sort of eh.

So if our math is found to be inconsistent, I think it would only be at some very high level and would not invalidate all of mathematics, but even then I kinda doubt it. I think most problems will be with constants as placeholders, or with something like infinity not being represented in the real world.
It would be hugely impactful to the high levels of academic math, but I don’t think we’d see any meaningful effects elsewhere. Consistent or not, math works—it performs perfectly for finance, engineering, statistical analysis, and a finite but practically uncountable number of other things. Some abstruse inconsistency won’t suddenly break all that, and if it were discovered we would just keep on using the same “broken” math because it does the job.
Hofstadter’s “Gödel, Escher, Bach: an Eternal Golden Braid” was on this, & on mind.
Self-consistency XOR completeness.
You can’t have both.
IF self-consistent, THEN incomplete.
IF complete ( completely-matching-universe ), THEN self-inconsistent.
∴ a self-consistent theory can ONLY describe a limited subset of the universe.
Period.
( Hofstadter later noted, in the preface to the revised version of the book, that his work was also about the nature of mind.
He claimed that nobody got it.
He obviously had no Buddhist friends.
His illustrative-vignettes with Achilles & the Tortoise showed that you can present evidence to a formal-system ( represented by the Tortoise ), & it’ll say “oh, yes, of course that’s valid”, & then the next thing it does is demonstrate that it ignores all such evidence.
That is characteristic of ideologies, prejudices, “religions”, & formal-systems.
It’s the same “we only accept what is self-consistent with our belief, NOT what contradicts it” as formal-systems embody.
As a grotesque remapping, imagine being tasked with proving that mothering is meaningful, but you’re only permitted to use arithmetic to prove it.
Well, you can’t, then, can you?
Mothering’s meaning ISN’T knowABLE within arithmetic: it’s in a different-kind of reality!
But people who insist that ONLY their-schema be allowed to measure “what is valid” … are all doing exactly the same scam that Hofstadter called-out in his absolutely-brilliant book, decades ago.
Anyways, Susskind has identified that “Time as a Fractal Flow” fits reality well, but he apparently hasn’t clued-in that if Time is fractal, … then isn’t Space also fractal?
The implications of Space being fractal … kinda nuke much of our whole physics-paradigm: continuum is completely-bogus, then.
Anyways, it all comes down to inapplicability:
Each model ISN’T applicable outside its applicability.
… isn’t Dunning-Kruger, or something, rooted in that?
People mistaking chess-competence for universal-competence?
Arithmetic isn’t universally-applicable.
Even something as specific as probability-calculus, with its “the difference between a tribe of 150 reduced to 149 is IDENTICAL to a tribe being reduced from 1 down to 0, because the ONLY difference is subtraction-of-1”…
that’s categorically wrong:
Reducing a tribe from 150 down to 149 is an ARITHMETIC difference of 1, but reducing it from 1 to 0 means the tribe’s continuity has been broken: it’s extinct now.
Using the wrong model makes one incapable of understanding true-meaning.
Based on what I learned from that book by Hofstadter, I’d say the thing is that we’re prone to applying models waaay outside of their applicability, & therefore we’re being bogus, while believing that we aren’t.
& THAT is the problem.
Models are limited in their applicability.
& we are … perhaps wired … to ignore that.
_ /\ _
Similar things have already happened.
Newton’s laws were rock solid, until we tried to explain really fast or really massive things with them. Then we needed Einstein’s corrections. Incidentally, we still use Newton’s versions for almost everything, because Einstein’s corrections are usually a rounding error.
So if we find a huge flaw, we will immediately start using the correction where it matters, and keep using the old flawed stuff where we’re sure it doesn’t matter.
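To put a rough number on “rounding error”: the special-relativistic correction is governed by the Lorentz factor γ = 1/√(1 − v²/c²), and for everyday speeds γ − 1 is tiny. A quick back-of-the-envelope sketch in Python (the speeds are just illustrative values):

```python
# Rough numeric sketch of "Einstein's corrections are usually a rounding error".
# For small v, the Lorentz factor gamma = 1/sqrt(1 - v^2/c^2) is roughly
# 1 + v^2/(2 c^2), so gamma - 1 measures the size of the correction.

C = 299_792_458.0  # speed of light in m/s

def gamma_minus_one(v: float) -> float:
    """Leading-order relativistic correction for a speed v given in m/s."""
    return v**2 / (2 * C**2)

print(gamma_minus_one(30))      # highway speed (~30 m/s): ~5e-15, utterly negligible
print(gamma_minus_one(3_874))   # GPS satellite (~3.9 km/s): ~8e-11, enough to matter for GPS clocks
```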
To be clear: we only continue using the old flawed stuff if it is simpler. If we found that one of the formulae we use to describe some system was actually overcomplicating things whilst also being incorrect, we’d obviously switch in all cases.
Well, the last time that happened (Russell’s paradox), we just banned people from using sets in a manner that would cause a paradox. Soo, probably something like that.
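For the curious, the contradiction at the heart of Russell’s paradox is small enough to state in a few lines. Here’s a sketch in Lean 4, using a generic membership relation `mem` rather than actual ZFC sets:

```lean
-- Russell's paradox, abstractly: nothing can play the role of
-- "the collection of all things that don't contain themselves",
-- because asking whether it contains itself is self-contradictory.
theorem no_russell_set {α : Type} (mem : α → α → Prop)
    (R : α) (hR : ∀ x, mem x R ↔ ¬ mem x x) : False :=
  have h : mem R R ↔ ¬ mem R R := hR R
  have hn : ¬ mem R R := fun hm => (h.mp hm) hm
  hn (h.mpr hn)

-- The ZFC-style "you can't do that" fix only lets you carve subsets out of sets
-- that already exist, so the problematic R never gets built in the first place.
```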
Cats and dogs living together!
Mass Hysteria!
There is an MO thread about this:
Basically “our mathematical system” for mathematicians usually (though not always) refers to so-called ZFC set theory. This is an extremely powerful theory that goes far beyond what is needed for everyday mathematics, but it straightforwardly encodes most ordinary mathematical theorems and proofs. Some people do have doubts about its consistency. Maybe some inconsistency in fact could turn up, likely in the far-out technical fringes of the theory. If that invalidates some niche areas of set theory but doesn’t affect the more conventional parts of math, then presumably the problem would get fixed up and things would keep going about like before. On the other hand, if the inconsistency went deeper and was harder to escape from, there would be considerable disruption in math.
See Henry Cohn’s answer in the MO thread for the longer take that the above paragraph is cribbed from.
I’m not going to dive in there at the moment; correct me if I’m wrong, but I suppose that’s the “the problem would get fixed up and things would keep going about like before” case.
To answer OP’s question: basically the same thing that happened last time an inconsistency was found, Russell’s paradox. To massively simplify, the fix was to add a new axiom that says “you can’t do that” and carry on. (Which gave rise to the aforementioned ZFC set theory.) Working math is still going to work in any case.
Yes, that’s Frege’s system mentioned in Henry Cohn’s post. But that happened in a very naive time compared with today. So it would be more of a surprise if something like that happened again.