TIL of ≹, which "articulates a relationship where neither of the two compared entities is greater or lesser than the other, yet they aren't necessarily equal either. This nuanced distinction is essential in areas where there are different ways to compare entities that aren't strictly numerical." (https://www.mathematics-monster.com/symbols/Neither-Greater-...)
Likewise, TIL. In your link however it states
"Example 1: Numerical Context
Let's consider two real numbers, a and b. If a is neither greater than nor less than b, but they aren't explicitly equal, the relationship is ≹"
How can that be possible?
This doesn't really pass the smell test for me either, but to play devil's advocate:
Imagine you have 2 irrational numbers, and for some a priori reason you know they cannot be equal. You write a computer program to calculate them to arbitrary precision, but no matter how many digits you generate they are identical to that approximation. You know that there must be some point at which they diverge, with one being larger than the other, but you cannot determine when or by how much.
Maybe you will find the proof that the infinite series 0.9999... exactly equals 1 interesting:
https://en.wikipedia.org/wiki/0.999...
Wow, can't believe I've never realised this. How counterintuitive.
The 1/3 * 3 argument is the one I found most intuitive.
I like the argument that observes "if you subtract 0.99(9) from 1, you get a number in which every decimal place is zero".
The geometric series proof is less fun but more straightforward.
As a fun side note, the geometric series proof will also tell you that the sum of every nonnegative power of 2 works out to -1, and this is in fact how we represent -1 in computers.
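The computer-representation half of that claim is easy to check (a Python sketch; the 8-bit width is an arbitrary choice):

```python
# In n-bit two's complement, the "sum of all powers of 2" truncated to n bits
# is the all-ones pattern, which is exactly how -1 is represented.
n = 8
all_ones = sum(2**i for i in range(n))   # 1 + 2 + 4 + ... + 128 = 255
print(all_ones == (1 << n) - 1)          # True: 0b11111111
print(all_ones == (-1) % (1 << n))       # True: -1 reduced mod 2^8
```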
How can the sum of a bunch of positive powers of 2 be a negative number?
Isn't the sum of any infinite series of positive numbers infinity?
The infinite sum of powers of 2 indeed diverges in the real numbers. However, in the 2-adic numbers, it does actually equal -1.
https://en.wikipedia.org/wiki/P-adic_number
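A small sketch of what convergence means there (plain Python, just checking divisibility; the last printed column is the 2-adic distance from -1):

```python
# Partial sums S_k = 1 + 2 + ... + 2^(k-1) equal 2^k - 1, so S_k - (-1) = 2^k.
# The 2-adic absolute value of 2^k is 2^(-k), which shrinks to 0:
# in that metric the partial sums converge to -1.
for k in (1, 4, 8, 16):
    s_k = sum(2**i for i in range(k))
    assert s_k + 1 == 2**k            # the gap to -1 is divisible by 2^k
    print(k, s_k, 2.0**-k)            # 2-adic distance |S_k - (-1)| = 2^-k
```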
Eh, P-adic numbers basically write the digits backwards, so "-1" has very little relation to a normal -1.
-1 means that when you add 1, you get 0. And the 2-adic number …11111 has this property.
Any ring automatically gains all integers as meaningful symbols because there is exactly one ring homomorphism from Z to the ring.
In an introductory course to String Theory they tried to tell me that 1+2+3+4+... = -1/12.
There is some weird appeal to the Zeta function which implies this result and apparently even has some use in String Theory, but I cannot say I was ever convinced. I then dropped the class. (Not the only thing that I couldn't wrap my head around, though.)
The result isn't owed to the zeta function. For example, Ramanujan derived it by relating the series to the product of two infinite polynomials, (1 - x + x² - x³ + ...) × (1 - x + x² - x³ + ...). (Ok, it's the square of one infinite polynomial.)
Do that multiplication and you'll find the result is (1 - 2x + 3x² - 4x³ + ...). So the sum of the sequence of coefficients {1, -2, 3, -4, ...} is taken to be the square of the sum of the sequence {1, -1, 1, -1, ...} (because the polynomial associated with the first sequence is the square of the polynomial associated with the second sequence), and the sum of the all-positive sequence {1, 2, 3, 4, ...} is then calculated by a simpler algebraic relationship to the alternating sequence {1, -2, 3, -4, ...}.
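The coefficient identity behind this can be checked mechanically (a short Python sketch, truncating the series at 8 terms):

```python
# Square the series 1 - x + x^2 - x^3 + ... by convolving its coefficients;
# the result should be 1 - 2x + 3x^2 - 4x^3 + ...
N = 8
a = [(-1)**k for k in range(N)]                      # 1, -1, 1, -1, ...
sq = [sum(a[i] * a[k - i] for i in range(k + 1)) for k in range(N)]
print(sq)   # [1, -2, 3, -4, 5, -6, 7, -8]
```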
The zeta function is just a piece of evidence that the derivation of the value is correct in a sense - at the point where the zeta function would be defined by the infinite sum 1 + 2 + 3 + ..., to the extent that it is possible to assign a value to the zeta function at that point, the value must be -1/12.
https://www.youtube.com/watch?v=jcKRGpMiVTw is a youtube video (Mathologer) which goes over this material fairly carefully.
The first is a good question that deserves an answer.
The answer to the second is "not always"...
Consider the sum 1 + 1/2 + 1/4 + 1/8 + 1/16 + 1/32 + ...
an infinite sequence of continuously decreasing terms: the more you add, the smaller the quantity added becomes.
It appears to approach, but never reach, some finite limit (here, 2).
Unless, of course, by "number" you mean a whole integer, counting number, etc.
It's important to nail down those definitions.
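For concreteness, here is that series' behavior with exact rational arithmetic (the limit is 2):

```python
from fractions import Fraction

# Partial sums of 1 + 1/2 + 1/4 + ... fall short of 2 by exactly the last
# term added, so they approach 2 without ever reaching it.
s = Fraction(0)
for k in range(10):
    s += Fraction(1, 2**k)
print(s)        # 1023/512: just below 2
print(2 - s)    # 1/512: the shrinking gap
```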
The same argument I mentioned above, that subtracting 0.99999... from 1 will give you a number that is equal to zero, will also tell you that binary ...11111 or decimal ...999999 is equal to negative one. If you add one to the value, you will get a number that is equal to zero.
You might object that there is an infinite carry bit, but in that case you should also object that there is an infinitesimal residual when you subtract 0.9999... from 1.
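In fixed-width decimal arithmetic the analogous claim is easy to check (n-digit arithmetic means working mod 10^n):

```python
# With n decimal digits, the all-nines number plus one wraps around to zero,
# which is exactly the behavior expected of -1.
n = 8
nines = 10**n - 1              # 99999999
print((nines + 1) % 10**n)     # 0
```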
It works for everything, not just -1. The infinite bit pattern ...(01)010101 is, according to the geometric series formula, equal to -1/3 [1 + 4 + 16 + 64 + ... = 1 / (1-4)]. What happens if you multiply it by 3?
You get -1. https://youtu.be/krtf-v19TJg?si=Tpa3EW88Z__wfOQy&t=75
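Truncated to a fixed width, this is checkable directly (a Python sketch with 16 bits):

```python
# The pattern 0b...0101, cut to n bits, is (4^(n/2) - 1) / 3.
# Multiplying by 3 yields 2^n - 1: the all-ones pattern, i.e. -1 mod 2^n.
n = 16
pattern = sum(4**i for i in range(n // 2))       # 0b0101010101010101
print(bin(pattern * 3 % (1 << n)))               # 0b1111111111111111
print(pattern * 3 % (1 << n) == (1 << n) - 1)    # True
```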
You can 'represent' the process of summing an infinite number of positive powers of x as a formula. That formula corresponds 1:1 to the process only for -1 < x < 1. However, when you plug 2 into that formula you essentially jump past the discontinuity at x = 1 and land on a finite value of -1. This 'makes sense' and is useful in certain applications.
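Numerically, the formula and the sum part ways exactly as described (a sketch; 1/(1-x) is the closed form of the geometric series):

```python
# Inside |x| < 1 the closed form matches the infinite sum; at x = 2 the sum
# diverges, but the formula still returns a finite value: -1.
f = lambda x: 1 / (1 - x)
print(f(0.5))   # 2.0, matching 1 + 1/2 + 1/4 + ...
print(f(2))     # -1.0
```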
It's a flawed psychological argument though, because it hinges on accepting that 0.333...=1/3, for which the proof is the same as for 0.999...=1. People have less of a problem with 1/3 so they gloss over this - for some reason, nobody ever says "but there is always a ...3 missing to 1/3" or something.
The problem is that there are two different ways to write the same number in infinite decimals notation. (0.999... and 1.000...).
That's what's counterintuitive to people; it's not an issue with 1/3. That has just one way to write it as a decimal: 0.333...
Another intuition:
All decimals with a single repeating digit are fractions with a denominator of 9.
E.g. 0.1111.... is 1/9
0.7777.... is 7/9
It therefore stands to reason that 0.99999.... is 9/9, which is 1
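This can be verified exactly with rationals (a Python sketch; N truncation digits chosen arbitrarily):

```python
from fractions import Fraction

# 0.ddd...d with N digits equals d*(10^N - 1)/(9*10^N); the gap to d/9
# is d/(9*10^N), which vanishes as N grows.
N = 20
for d in range(1, 10):
    truncated = Fraction(int(str(d) * N), 10**N)
    assert Fraction(d, 9) - truncated == Fraction(d, 9 * 10**N)
print("each repeating digit d approaches d/9; for d = 9 that is 1")
```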
It is false - real numbers fulfill the trichotomy property, which is precisely the absence of such a relationship: any two real numbers are either less than, equal to, or greater than each other.
But the numerical context can still be correct: (edit: ~~imaginary~~) complex numbers for example don’t have such a property.
i'm learning about this right here as I read, but do you mean complex numbers rather than imaginary?
Yep, that’s what I meant, sorry!
Same question.
Or more generally, vectors. They don't have a total order, because if we define "less than"/"greater than" in terms of magnitude (length), then for any vector V (other than 0) there are infinitely many vectors that are not equal to V but whose length is equal to the length of V.
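A concrete instance (Python, comparing 2D vectors by Euclidean length):

```python
import math

# Distinct vectors with identical magnitude: ordering by length can call
# neither one greater, yet they are not equal as vectors.
v = (3.0, 4.0)
w = (5.0, 0.0)
print(math.hypot(*v) == math.hypot(*w))   # True: both have length 5
print(v == w)                             # False
```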
Is this what ≹ is talking about?
As I've written in another comment here, a great example of a number-y field which is not totally ordered is Games ⊂ Surreal Numbers ⊂ ℝ. There you have certain "numbers" which can be confused with (read: incomparable to) whole intervals of numbers. Games are really cool :)
You've got the subset relationships backwards: Reals are a subset of the field of Surreal Numbers which is a subset of the group of Games. (Probably better phrased as embeddings, rather than subsets, but the point remains...)
Note that Games do _not_ form a field: there is no general multiplication operation between arbitrary games.
I would imagine trying to compare a purely imaginary number (3i) to a real number (3) would suffice.
An imaginary number wouldn't obey the stated constraint of being real.
No, but if the parent's question goes beyond "how can this happen with reals" to "how can this happen with numbers in general", this answers his question.
The very next example on the page is "imagine two complex numbers with the same magnitude and different angles". For that to answer the parent's question, you'd have to assume he stopped reading immediately after seeing the part he quoted.
The question is why the page says "imagine two real numbers that aren't comparable".
I think they mean the case where a and b are variables for which you don't know the values.
Yeah, that's how I understood it. E.g. one might write
to mean that in general the equality doesn't hold, despite exceptions like a = b = 0. Strictly you should write something like
But shorthand and abuse of notation are hardly rare.
(Noticed a copy-and-paste error too late: the != in the second expression should of course be =.)
No, that can be the case also for mathematical entities for which you can know the values, not just for "unknown variables".
That doesn't seem possible with the reals. An example from programming that comes to mind is NaN ≹ NaN (in some languages).
Isn't this what happens with infs (in maths and in many programming languages)?
Edit: Not in many programming languages. In IEEE-754 inf == inf. In SymPy too oo == oo, although it's a bit controversial. Feels sketchy.
The floating point specification mandates that NaN does not compare to NaN in any way, so it should be all languages. If you want to know whether a value is NaN, use isnan().
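In Python, for instance:

```python
import math

# NaN is incomparable even to itself: all ordered comparisons are False.
nan = float("nan")
print(nan < nan, nan > nan, nan == nan)   # False False False
print(math.isnan(nan))                    # True: the reliable test
```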
For the reals it is only hypothetical; the domain has a total order.
∞ and ∞ + 1 come to mind, but I don't think it really counts.
That just depends on the numeric structure you're working with. In the extended reals, +inf is equal to +inf + 1.
In a structure with more infinite values than that, it would generally be less. But they wouldn't be incomparable; nothing says "comparable values" quite like the pair "x" and "x + 1".
I guess it depends on the exact definitions, but the reals usually don't include the infinities. At my uni we introduced the infinities precisely as an extension of the reals, with the two values defined via `lim`.
Hmm, what about +0 and -0?
+0 = -0
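They compare equal even though they are distinct bit patterns (Python sketch of the IEEE 754 behavior):

```python
import math

# The signed zeros compare equal, but carry different signs internally.
pos, neg = 0.0, -0.0
print(pos == neg)                                     # True
print(math.copysign(1, pos), math.copysign(1, neg))   # 1.0 -1.0
```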
Think how any number on the imaginary axis of the complex plane isn't equal to a number of the same magnitude on the real axis.
Now if you really think about it, a number of a given magnitude on the x axis also isn't exactly "equal" to a number of the same magnitude on the y axis, or vice versa. Otherwise -5 and 5 would be equal, because they're the same magnitude from 0.
But |5|=|-5| so I don't exactly see your point.
Edit: oh, I see what you mean. 1 is not larger or smaller than i, but it also doesn't equal i.
Probably does not apply for real numbers, but could totally apply to, e.g., fuzzy numbers, whose 'membership function' bleeds beyond the 'crisp' number into nearby numbers.
You could imagine two fuzzy numbers with the same 'crisp' number having different membership profiles, and thus not being "equal", while at the same time being definitely not less and not greater at the same time.
Having said that, this all depends on appropriate definitions for all those concepts. You could argue that having the same 'crisp' representation would make them 'equal' but not 'equivalent', if that was the definition you chose. So a lot of this comes down to how you define equality / comparisons in whichever domain you're dealing with.
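As an illustration only (hypothetical triangular membership functions, not any standard fuzzy-arithmetic library):

```python
# Two fuzzy numbers with the same 'crisp' peak (5) but different spreads:
# by peak, neither is less nor greater, yet their membership profiles differ,
# so as fuzzy sets they are not "equal".
def tri(a, b, c):
    """Triangular membership function: support [a, c], peak at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

narrow, wide = tri(4, 5, 6), tri(2, 5, 8)
print(narrow(5) == wide(5) == 1.0)   # True: same crisp value
print(narrow(4.5), wide(4.5))        # 0.5 vs 0.833...: different profiles
```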
It isn't. The real numbers are a totally ordered field. Any two real numbers are comparable to each other.
Perhaps this: if they represent angles in degrees, then 1 and 361 represent the same absolute orientation, but they're not the same, as 361 indicates you went one full revolution to get there.
Contrived, but only thing I could think of.
That article is misinterpreting the meaning of the symbol. It isn't useful in mathematics because it is a contradiction in terms: if "neither of the two compared entities is greater or lesser than the other" then they are equal.
The author of the original article uses it correctly - think of it more in terms of importance, as in their example.
The business is no more or less important than the developer, but they are NOT equal.
It doesn't have to mean importance though, just the method by which you are comparing things.
Monday ≹ Wednesday
Come to think of it, it should be called the 'No better than' operator.
Not in a partial order.
For example, take this simple lattice structure, where a line marks that its top end is greater than its bottom end:

   11
  /  \
 01    10
  \  /
   00

11 is > all the others (> 00 by transitivity), 00 is < all the others (< 11 by transitivity), but 01 is not comparable with 10: it is neither lesser nor greater under the described partial order.

You can actually see this kind of structure every day: unix file permissions, for example. Given a user and a file, the user's permissions are an element of a lattice where the top element is rwx (or 111 in binary, or 7 in decimal, which means the user has all three permissions to read, write, and execute) and the bottom element is --- (or 000 in binary, or 0 in decimal, which means the user has no permissions). All other combinations of r, w, and x are possible, but not always comparable: r-x is neither greater nor lesser than rw- in the permissions lattice; it's just different.
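The permissions lattice is easy to model directly (Python sketch with the usual r=4, w=2, x=1 bit values):

```python
# Order permission masks by "subset of bits": a <= b iff every bit of a is in b.
R, W, X = 4, 2, 1

def le(a, b):
    return a & b == a

assert le(R | X, R | W | X)    # r-x is below rwx
print(le(R | X, R | W))        # False: r-x vs rw- ...
print(le(R | W, R | X))        # False: ... incomparable either way
```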
Yes, or for more familiar examples: coordinates and complex numbers. The "default" less-than and greater-than don't have any meaning for them; you have to define one, which may be "imperfect" (because one can't do better), hence the concept of partial order.
That’s only true for a total order; there are many interesting orders that do not have this property.
It holds for the usual ordering on N, Z, Q and R, but it doesn’t hold for more general partially ordered sets.
In general one has to prove that an order is total, and this is frequently non-trivial: Cantor-Schröder-Bernstein can be seen as a proof that the cardinal numbers have a total order.
Example: alphabetic ordering in most languages with diacritics. For example, "ea" < "éz", but also "éa" < "ez". That's because e and é are treated the same as far as the ordering function is concerned, but they are obviously also NOT the same glyph.
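A minimal sketch of that idea (this strips accents to build a sort key; real locale collation, e.g. via the `locale` module or ICU, is more involved):

```python
import unicodedata

# Collate by a key that removes combining accents, so "e" and "é" order alike
# even though the glyphs are not the same.
def collation_key(s):
    return "".join(c for c in unicodedata.normalize("NFD", s)
                   if not unicodedata.combining(c))

print(collation_key("ea") < collation_key("éz"))   # True
print(collation_key("éa") < collation_key("ez"))   # True
print("é" == "e")                                  # False: distinct glyphs
```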
Is that really a contradiction? What about complex numbers?
That’s only true for linearly ordered structures, but isn’t true for partially ordered ones.
For example, set inclusion. Two different sets can be neither greater than nor smaller than each other. Sets ordered by inclusion form a partially ordered lattice.
Reminds me of the concept of games in combinatorial game theory. They are a superset of the surreal numbers (which are themselves a superset of the real numbers) in which the definition of the surreal numbers is loosened in a way that loses the property of being totally ordered. This creates games (read: weird numbers) which can be "confused with", or "fuzzy" with, other numbers. The simplest example is * (star), which is confused with 0, i.e. not bigger or smaller than it; it's a fuzzy cloud around zero (notated 0║*). More complex games called switches can be confused with bigger intervals of numbers and are considered "hot". By building numbers from switches you can create even more interesting hot games.
Here's a relevant video on the topic: https://www.youtube.com/watch?v=ZYj4NkeGPdM
I really love that video.
Exactly this video made me read more into this topic; I'm currently reading Winning Ways and Lessons in Play simultaneously. It's quite fun! I've just gotten started and am looking forward to what's left.
Thanks for sharing this. I just discovered it now and I love it too.
The space of possible abstractions for any given phenomenon is vast, yet we almost always just assume that real numbers will do the trick and then begrudgingly allow complex ones when that doesn't work. If we're not lucky we end up with the wrong tool for the job, and we haven't equipped people to continue the exploration. It's a bias with some pretty serious consequences (thanks... Newton?).
I don't think I've seen the inadequacy of number-systems-you've-heard-of demonstrated so clearly as it is done here.
Well, don't leave us hanging! What are some of your favorite hot games on top of switches?
I'm just starting to learn about all this stuff, but iirc the game of Go is famously "hot". I'll also emphasize that when talking about "games", usually what is meant is a game position. What specific game you are playing isn't too important, as it can be shown that some positions in different games are equivalent.
The example z_1 ≹ z_2 for complex numbers z_1, z_2 is weird. Imo it would be clearer to state |z_1| = |z_2|, that is both complex numbers have the same absolute value.
As a PhD student in math, I have never seen it before. I do not believe that it plays any crucial role.
It sounds like the symbol "≹" just means "incomparable", which is a well-known concept in math. https://en.wikipedia.org/wiki/Comparability
This symbol for it may be useful, but it's the concept that matters.
I was so hoping I could win the day with 0 ≹ -0. But alas, 0 == -0 and 0 === -0.
NaN ≹ NaN
Reminded me of this: https://math.stackexchange.com/q/586229/361068
Apples ≹ pears.
One example would be if you define one set A to be "less than" another B if A is a subset of B. Then ∅ < {0} and {0} < {0, 1} but {0} ≹ {1}.
Such a thing is called a partial ordering and a set of values with a partial ordering is called a partially ordered set or poset (pronounced Poe-set) for short.
https://en.wikipedia.org/wiki/Partially_ordered_set
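Python's built-in sets expose this partial order directly through < and <=:

```python
# Subset comparison is a partial order: chains exist, but {0} and {1}
# are incomparable.
print(set() < {0})              # True
print({0} < {0, 1})             # True
print({0} < {1}, {1} < {0})     # False False
print({0} == {1})               # False: incomparable, not equal
```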
I suppose that this glyph should result from a combination of emojis for apples and oranges.
I find this concept is important in understanding causal ordering for distributed systems, for example in the context of CRDTs. For events generated on a single device, you always have a complete ordering. But if you generate events on two separate devices while offline, you can't say one came before the other, and end up with a ≹ relationship between the two. Or put differently, the events are considered concurrent.
So you can end up with a sequence "d > b > a" and "d > c > a", but "c ≹ b".
Defining how tie-breaking for those cases is deterministically performed is a big part of the problem that CRDTs solve.
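A minimal sketch of that ordering with vector clocks (the tuples and the happens_before helper are illustrative, not any specific CRDT library's API):

```python
# happens-before on vector clocks: a -> b iff a <= b componentwise and a != b.
# Pairs where neither direction holds are concurrent: the ≹ case.
def happens_before(a, b):
    return all(x <= y for x, y in zip(a, b)) and a != b

a = (1, 1)   # common ancestor state
b = (2, 1)   # event on device 1 while offline
c = (1, 2)   # event on device 2 while offline
print(happens_before(a, b), happens_before(a, c))   # True True
print(happens_before(b, c), happens_before(c, b))   # False False: concurrent
```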
The jargon from order theory for this phenomenon is: partial ordering.
It really is an interesting thing. As human beings who by nature think in abstract, non-concrete units (as opposed to the mathematically precise units of a computer program), we tend to compare related things. They might belong to the same category, but they might not be eligible for direct comparison at all.
Once you internalize partial ordering, your brain gets a little more comfortable handling similar, yet incomparable, concepts.