For sure they do not work the way the "Path Loss Equation" would have you believe they do. The path loss equation violates conservation of energy, i.e., the frequency (or wavelength, depending on how it's structured) term cannot be in the equation. And the receiving antenna does not have any 'gain' other than physically getting bigger or smaller, though the transmitting antenna can have gain depending on shape and size. That is, the transmitting antenna and the receiving antenna work very differently. Yes, end to end the path loss equation gives the right answer, but in between it's scientifically illiterate.
Would it be possible to construct a rudimentary FM radio receiver with only the most basic parts, à la Masters of the Air?
AM is quite easy (a diode and a capacitor can be enough), but an FM receiver needs a local oscillator, which requires some active elements (transistors) and a more complex circuit.
Under what circumstances is a diode and a capacitor enough to make a radio receiver?
The AM peak detector is probably the easiest and most primitive AM demodulator: it's basically made of a diode, a capacitor and a resistor. I implemented one in high school while doing my first physics experiments.
The idea behind this demodulator is quite simple: the diode filters out the negative half of the signal, then the positive half charges the capacitor, and the energy is released at a roughly constant rate during the negative-signal "hole" (R*C must be several orders of magnitude larger than 1/f, where f is the carrier frequency, yet still short compared to the audio period so the output can track the envelope).
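For anyone who wants to see that RC constraint in action, here's a minimal numpy sketch (all component values made up for illustration): an ideal diode plus an RC discharge recovering a 1 kHz tone from a 1 MHz carrier.

    import numpy as np

    fs = 50e6                  # simulation sample rate, Hz
    fc, fm = 1e6, 1e3          # carrier and audio frequencies, Hz
    t = np.arange(0, 5e-3, 1 / fs)

    audio = 0.5 * np.sin(2 * np.pi * fm * t)
    am = (1 + audio) * np.sin(2 * np.pi * fc * t)   # AM signal

    # Peak detector: the diode charges C on positive peaks, R discharges it.
    # Pick RC so that 1/fc << RC << 1/fm, per the constraint above.
    rc = 20e-6                                      # 20x the carrier period
    decay = np.exp(-1 / (fs * rc))
    out = np.empty_like(am)
    v = 0.0
    for i, s in enumerate(am):
        v *= decay             # capacitor discharges through R
        if s > v:              # diode conducts when input exceeds cap voltage
            v = s
        out[i] = v

    # Output should track the envelope (1 + audio), up to carrier ripple.
    print(np.max(np.abs(out[5000:] - (1 + audio)[5000:])))  # ripple-level, ~0.05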
I was trying to say that a capacitor and a diode (a detector) do not make a complete receiver.
You will also need a resistor, otherwise the capacitor is not going to discharge, but a resistor is the easiest component :)
I read "AM is quite easy" as "AM demodulation is quite easy".
A Foxhole radio was often made from a coil of wire (inductor), a razor blade and pencil lead (diode):
https://en.wikipedia.org/wiki/Foxhole_radio
The aerial is connected to the grounded inductor. The coil's internal parasitic capacitance, along with the capacitance of the antenna, forms a resonant circuit (tuned circuit) with the coil's inductance. The coil has a high impedance at its resonant frequency, so it passes radio signals from the antenna at that frequency along to the detector, while conducting signals at all other frequencies to ground. By varying the inductance with a sliding contact arm, a crystal radio can be tuned to receive different frequencies; most of these wartime sets did not have a sliding contact and were built to receive only one frequency, that of the nearest broadcast station. The detector and earphones were connected in series across the coil, which applied the radio signal of the received station to them. The detector acted as a rectifier, allowing current to flow through it in only one direction. It rectified the oscillating radio carrier wave, extracting the audio modulation, which passed through the earphones, and the earphones converted the audio signal to sound waves.
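As a sanity check on that tuned circuit, the resonant frequency is f = 1/(2π√(LC)); plugging in illustrative (not historical) values shows a foxhole-style coil lands right in the AM broadcast band:

    import math

    L = 230e-6   # coil inductance, henries (illustrative value)
    C = 100e-12  # antenna + parasitic capacitance, farads (illustrative)
    f = 1 / (2 * math.pi * math.sqrt(L * C))
    print(f"{f / 1e3:.0f} kHz")   # ~1049 kHz, mid AM broadcast band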
Very high impedance transducer, and very low forward voltage diode.
If you are building an AM crystal radio. [1] You will also need a high-impedance speaker [2] if you want to operate it without a power supply, otherwise you will need an amplifier. You can avoid using a commercial diode by making your own point contact diode as done in Foxhole radios [3] and you can make your own piezoelectric speaker from Rochelle salt [4]. Here [5] is one personal projects site touching all those topics.
In conclusion, you should be able to build a simple radio from copper wire, aluminium foil, a pencil, a razor blade, and baking powder.
[1] https://en.wikipedia.org/wiki/Crystal_radio
[2] https://en.wikipedia.org/wiki/Crystal_earpiece
[3] https://en.wikipedia.org/wiki/Foxhole_radio
[4] https://en.wikipedia.org/wiki/Potassium_sodium_tartrate
[5] https://rimstar.org/science_electronics_projects/index.htm#S...
By coiling wires separately to form an inductor
Yes, we did this in middle school. Was great fun :D
I think this undersells the trick behind radio.
Say we have the technology to broadcast a signal from an antenna to receivers, with some bandwidth B. Without getting clever, we can only send or receive one signal, since any others would interfere with each other.
The trick is: can we shift the band B up to some other base frequency F so that the shifted band [F, F + B] no longer overlaps the original [0, B]? And can we stack channels at F, 2F, ..., N·F the same way? If we can do that, and then downshift from around N·F back down to baseband, it means we can broadcast multiple channels, and receivers can tune their antennas to a chosen F and downshift the received signal back to 0, recovering it at the original bandwidth B.
A cheap way to do this is amplitude modulation, where multiplying a signal with bandwidth B by a carrier signal of frequency F shifts it up to the range F +/- B and we can space channels apart by 2B to get however many channels our antennas allow for.
The real question is, why is it 2B and not B? Well, that lies in some Fourier analysis, where the bandwidth of a real signal extends into negative frequency ranges. Nevertheless, there is another trick, called single-sideband modulation (SSB), where we can shift a signal into the range (F, F + B) instead of F +/- B and demodulate it back to (-B, B) to get the original.
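You can watch the F ± B behaviour directly by multiplying a tone by a carrier and taking an FFT; a tiny numpy sketch with arbitrary example frequencies (1 kHz "message", 20 kHz carrier):

    import numpy as np

    fs = 100_000                              # sample rate, Hz
    t = np.arange(0, 1.0, 1 / fs)
    message = np.sin(2 * np.pi * 1_000 * t)   # baseband tone, B ~ 1 kHz
    mixed = message * np.cos(2 * np.pi * 20_000 * t)   # multiply by carrier F

    spectrum = np.abs(np.fft.rfft(mixed))
    freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
    print(freqs[spectrum > 0.5 * spectrum.max()])
    # -> [19000. 21000.]: the two sidebands at F - B and F + B, which is
    #    why plain double-sideband AM channels need 2B of spacing.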
And that gets us to the 1950s in terms of radio technology.
The trick behind FM is to see that we can get more bandwidth by shifting the frequency response not into a series of non-overlapping channels centered at carrier frequencies, like AM, but by distributing most of the information across many non-overlapping bands over the whole spectrum available to the antennas. To do this we don't modulate the amplitude of the carrier, but its frequency. This makes it possible to distribute far more bandwidth across a wide range of frequencies, and it's how FM radio works today.
These concepts form the foundation of modern radio communication. We can modulate data signals onto different bands and receive them, provided we know where to tune. And these bands can either be contiguous chunks of spectrum (AM) or interleaved (FM). The next step is to think in terms of time: we can have receivers negotiate not only which frequency ranges they care about, but which time frames they want to listen in before waiting for their next time slot.
For those interested in the theory, the fundamental problem is that we can design antennas that transmit or receive at some fixed maximum bandwidth, bounded by physics. The engineering problem is to figure out how to share that bandwidth among the maximum number of senders and/or receivers. Amplitude modulation is excellent, but it divides the bandwidth into a fixed number of channels of maximum individual bandwidth. FM is a bit more efficient in how it lets many broadcasters serve even more receivers, each choosing which channel to receive. But for modern communications, where we need high bandwidth for distinct transmitter/receiver connections, we need protocols to figure out how to share the bandwidth over the air, and the two tricks are to divide that bandwidth by frequency (like AM and FM) or by time (sharing the same frequency channels, but only picking the frames that we care about), or both.
divide that bandwidth by frequency (like AM and FM) or time
Ah. The real magic is when we separate by space (beyond just frequency or time). The ability to do this was discovered relatively recently, in 1996, by a guy called Foschini, though radio astronomers will say "Meh". By adding multiple antennas and doing space-time coding, engineers found they could pump an order of magnitude more data through a radio channel. The maths involved is high-school level (linear simultaneous equations), and it's magic to understand Foschini's work and think "Why didn't we do that before?"
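A toy version of that "just solve the simultaneous equations" idea, as a sketch: a noiseless 2x2 spatial-multiplexing channel with made-up gains, assumed known at the receiver (in practice learned from pilot symbols).

    import numpy as np

    # Two antennas transmit two different symbols at the same time and
    # frequency; each receive antenna hears a different linear mix of them.
    H = np.array([[0.9, 0.3],    # illustrative channel matrix: y = H @ x
                  [0.2, 1.1]])
    x = np.array([1.0, -1.0])    # transmitted symbols
    y = H @ x                    # what the two receive antennas observe

    x_hat = np.linalg.solve(H, y)   # high-school linear algebra
    print(x_hat)                    # [ 1. -1.]: both streams recovered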
The other bit of radio magic is error control coding. This is the stuff that lets us reliably talk to Voyagers I and II.
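Voyager's actual codes (convolutional plus Reed-Solomon) are much fancier, but the core trick is already visible in the tiny Hamming(7,4) code; a minimal sketch:

    import numpy as np

    # Hamming(7,4): 4 data bits -> 7-bit codeword; corrects any 1-bit error.
    G = np.array([[1,0,0,0,1,1,0],      # generator matrix
                  [0,1,0,0,1,0,1],
                  [0,0,1,0,0,1,1],
                  [0,0,0,1,1,1,1]])
    H = np.array([[1,1,0,1,1,0,0],      # parity-check matrix
                  [1,0,1,1,0,1,0],
                  [0,1,1,1,0,0,1]])

    data = np.array([1, 0, 1, 1])
    codeword = data @ G % 2

    received = codeword.copy()
    received[2] ^= 1                    # the channel flips one bit

    syndrome = H @ received % 2         # nonzero syndrome = error happened;
    for pos in range(7):                # it equals the column of H at the
        if np.array_equal(H[:, pos], syndrome):   # error position
            received[pos] ^= 1          # flip it back
    print(received[:4])                 # [1 0 1 1]: data recovered intact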
Fascinating how we keep being inspired by fundamental physics and astronomy to keep cramming more information into our channels. I'm still trying to understand Orbital Angular Momentum multiplexing: https://en.m.wikipedia.org/wiki/Orbital_angular_momentum_mul...
I'd agree with the Wikipedia article, that it sounds like MIMO, in that it requires the beam to have a spatial extent.
From the Wikipedia article:
can thus access a potentially unbounded set of states
That's what people originally thought about MIMO. MIMO's not unbounded. The limit to the number of states is related to the surface area of the volume enclosing the antenna, with the unit of distance being the wavelength. A result radio astronomers already knew when the comms people derived it. With absolutely no evidence to back it up, I'd guess that the same limit applies to OAM multiplexing.
As an aside, when one expresses physics in terms of information theory, my understanding is that the maximum number of bits that can be stored in a volume of space (also the number of bits required to completely describe that volume of space) is related to the surface area of the volume, with the linear unit being Planck lengths [1]. Is MIMO capacity in some way a fundamental limit in communications?
[1] https://physics.stackexchange.com/questions/497475/can-anyon...
And for those even deeper into the theory, one question you might ask is, if we can divide spectrum and time to get some bandwidth B per channel, how many bits can we send/receive over a distinct channel?
The answer is C = B·log2(1 + S/N), where B is the bandwidth and S/N is the signal-to-noise ratio determined by the environment (how much noise is present relative to the signal being transmitted). The crazy thing is this was proven in the 1940s, and everyone interested should go read The Mathematical Theory of Communication by Claude Shannon. This is referred to as the Shannon-Hartley theorem, and it determines the channel capacity (C, in bits/second) of any communication channel in the presence of noise.
The math might seem heady, but it's actually fairly approachable, and the paper is available online. It's fascinating that the fundamentals were proven out in one work nearly 80 years ago by a handful of people, and the math is not that bad.
The thing that makes this nuts is that if an engineer picks some target bitrate for a device, say a cellphone watching video, they can work backwards to determine the channel capacity they need, do some experiments to figure out the noise, and then determine the target their modem protocol needs to reach to be suitable. And this is how we get 5G and fiber or whatever comes next.
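That back-of-the-envelope workflow fits in a few lines (all numbers below are made-up examples, not real 5G parameters):

    import math

    def capacity_bps(b_hz, snr_linear):
        """Shannon-Hartley: C = B * log2(1 + S/N)."""
        return b_hz * math.log2(1 + snr_linear)

    # Forward: an illustrative 20 MHz channel at 20 dB SNR.
    print(capacity_bps(20e6, 10 ** (20 / 10)) / 1e6)   # ~133 Mbit/s

    # Backwards: what SNR does a 50 Mbit/s target need in 10 MHz?
    snr_needed = 2 ** (50e6 / 10e6) - 1
    print(10 * math.log10(snr_needed))                 # ~14.9 dB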
Shannon was pretty ridiculous. He basically invented information theory, proved all the major theorems involved, and applied it to communications and error-correction codes. If you work in RF you can't do much without encountering his work. (It did take a while before anyone figured out how to get close in practice to the limits he proved, though)
And before his work on information theory, his master's thesis showed that Boolean algebra could be used to design digital circuits, laying the groundwork for logic gate design:
https://en.wikipedia.org/wiki/A_Symbolic_Analysis_of_Relay_a...
https://spectrum.ieee.org/claude-shannon-information-theory
One of the all time greats.
This is an excellent article, thank you for submitting it! I love how effortlessly this article delivered an intuition for why an ideal antenna length would be half of the wavelength of the signal you want to receive. I was also delighted by the point about how all methods of modulating a wave can be recontextualized as frequency modulation!
I was also delighted by the point about how all methods of modulating a wave can be recontextualized as frequency modulation!
That's the classic way to think about it. Another way is to view the input as simply a sequence of voltage readings. Extracting a useful signal from that is an exercise in exploiting redundancy in noisy data. [1] Software defined receivers work that way.
Analog radio (AM, FM, etc.) is a hulking big carrier weakly modulated by the signal. Analog TV, which was AM video with FM audio, had 80% of the power in the carrier. Analog UHF TV stations often had multi-megawatt transmitters to overpower noise by sheer RF output. Digital broadcast TV transmitters output maybe 150 kW, because the modulation is more efficient.
Modern modulation techniques are insanely efficient. It's amazing that mobile phones work.
[1] https://ocw.mit.edu/courses/6-450-principles-of-digital-comm...
I may not be understanding what part of the operation you are talking about, but radiated power? No transmitter had megawatts of radiated power. 50 kW for an FM broadcast antenna is a common number passed around. The huge Voice of America shortwave station was 300 kW.
I have heard of military radars having megawatts of radiated power, but even then it was in the low megawatts.
For UHF television stations, the effective radiated power (ERP) is typically 1 Megawatt. That is accomplished (for example) with a 57 kilowatt transmitter and an antenna with 12.44 dB gain.
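The arithmetic behind those numbers, for anyone who wants to check:

    import math

    tx_power_w = 57_000                   # 57 kW transmitter (from above)
    gain_db = 12.44                       # antenna gain (from above)
    erp_w = tx_power_w * 10 ** (gain_db / 10)
    print(f"{erp_w / 1e6:.2f} MW ERP")    # ~1.00 MW effective radiated power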
The goat testicle doctor did get permission for running his border blaster at a million watts.
https://en.wikipedia.org/wiki/John_R._Brinkley#Brinkley_and_...
""…all methods of modulating a wave can be recontextualized as frequency modulation!"
That's the classic way to think about it. Another way is to view the input as simply a sequence of voltage readings."
Right. And modulation of any type produces sidebands, as per Fourier! Do anything whatsoever to disturb a pure sine wave and math and physics dictate that sidebands must appear.
I'm intrigued by things like this that used to be high technology but now are mature and pushed way down into the infrastructure. No one is going to make much money being really good at radio, any more than they will be really good at machining steel, but it's still necessary for higher levels of the tech stack to function.
The US (and Chinese, and Russian, and European...) government spends billions a year on companies that are good at radio. Radar, satellite communications, 5G, etc, etc. are all critical parts of modern technology stacks, that are "high technology", and key for forward innovation. If you think it's a solved problem, why doesn't every telecom company have nationwide 5G deployed yet?
There is A LOT of money to be made in the space, if you're good.
But, it's not AdTech, so HN isn't familiar with the field I guess :^)
"No one is going to make much money being really good at radio, any more than they will be really good at machining steel,"
How do you know? For instance, I'd suggest that not every method of modulation has been invented or even yet implemented. Also, we've hardly begun to design and implement meta materials into antennae and RF filters—the field's still wide open for innovation and invention.
And new methods of 'machining' steel have recently been invented and are just coming into use (if I owned the patents I'd be sitting pretty for life).
I promise people still make piles of money being really good at radio and really good at machining steel. The complexity of the deliverables has increased, yes, but the expertise and technical skill to do modern radio and machining is very much rewarded in the marketplace.
Their primer article [1] is also really nice.
Today, I’d like to close this gap with a couple of crisp definitions that stay clear of flawed hydraulic analogies, but also don’t get bogged down by differential equations or complex number algebra.
Related: many, many years ago, when Facebook didn't exist yet, Google still passed as a "good" company, and hobbyist electronics geeks had almost only PICs to choose from, I found online a very long and complete electronics course that went from zero to basic R/C concepts, to transistors, up to pretty advanced topics like magnets/transformers and, IIRC, radio too.
It was made of pretty raw HTML pages and images, and what was most peculiar about it was that it managed to explain a lot of concepts up to an applicable level (as in, actually designing analog circuits) without (any?) calculus at all.
Some of those may be false memories, but if I remember correctly:
* Its HTML style had a yellowy background
* It was taken from an old-ish (US?) Navy electrical engineer-focused applied electronics course for training naval engineers
* It was more focused on analog circuits
I remember I downloaded it all but after all those years who knows where it could be. Maybe in some 1GB disk of my first Pentium PC, so it's basically lost.
Does anyone on HN know what I'm talking about? I was never able to find it again.
[1] https://lcamtuf.substack.com/p/primer-core-concepts-in-elect...
I guess the original "NEETS" content is this:
http://compatt.com/Tutorials/NEETS/NEETS.html
Content updated in 2011.
THAT'S IT!!!
Thank you so much! I've been looking for this for at least fifteen years!
And there are even links to previous HTML versions (this one is PDF)... amazing!
flawed hydraulic analogies
I want to say that's cool, avoiding common pitfalls in explanations, but I want to point out that all analogies fall short; otherwise they would be the same thing, and not an analogy.
That is, if the hydraulic analogy were perfect, then that would mean that electronics would just behave as a fluid and we could teach it as a part of fluid dynamics.
But instead it is an analogy; electronics is not a part of fluid dynamics, there are just a few similarities that can be used for teaching.
It’s not unusual to teach an imperfect simplistic model at first that you intend to supplement later with more details that break the analogy.
A perfectly uniform waveform is still not useful for communications...
It is if you encode information by switching it on and off in standard patterns. These uniform waveforms--or "continuous wave" (CW)--allow very simple devices with very little RF power to be used to communicate with Morse Code.
One could argue that technically it's no longer uniform if it's switched on and off, though.
It is no longer uniform. It's counter-intuitive (unless you've really internalised the Fourier transform and/or the Shannon-Hartley theorem) but a pure sine wave stops being a pure sine wave if you key it on and off and occupies progressively more bandwidth as the keying rate increases.
An even less intuitive result is that you can decode a signal that is weaker than the noise floor if the data rate is sufficiently low and/or the bandwidth is sufficiently high. This has practical applications in amateur modes like JT65, ultra-wideband communications and even GPS.
You can see it happening in [1], a waterfall display (time is the vertical axis, frequency the horizontal) of a few CW signals: compare the harsh broadband clicks on the right to the nice dotted lines on the left. That kind of broadband noise happens when your signal goes from on to off too fast (or something else, like just not generating a clean sine wave). If your radio can shape your keying to have a little ramp-up/ramp-down, you get a much cleaner-looking signal like those on the left.
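You can reproduce the effect numerically: key a tone on and off hard, then with a short raised-cosine ramp, and compare how much spectrum each occupies (a rough sketch; the 99.9%-energy bandwidth measure below is just one crude choice):

    import numpy as np

    fs = 8000
    t = np.arange(0, 1.0, 1 / fs)
    tone = np.sin(2 * np.pi * 700 * t)               # a 700 Hz CW tone

    # Key the tone on at 0.3 s and off at 0.7 s.
    up = np.clip((t - 0.3) / 0.01, 0, 1)             # 10 ms ramps
    down = np.clip((0.7 - t) / 0.01, 0, 1)
    trap = np.minimum(up, down)
    hard = tone * (trap > 0)                         # instant on/off
    shaped = tone * (1 - np.cos(np.pi * trap)) / 2   # raised-cosine edges

    def occupied_bw(sig, frac=0.999):
        """Width of the band holding `frac` of the signal's energy."""
        p = np.abs(np.fft.rfft(sig)) ** 2
        f = np.fft.rfftfreq(len(sig), 1 / fs)
        order = np.argsort(p)[::-1]
        cum = np.cumsum(p[order]) / p.sum()
        kept = order[: np.searchsorted(cum, frac) + 1]
        return f[kept].max() - f[kept].min()

    print(occupied_bw(hard), occupied_bw(shaped))
    # Hard keying splatters energy far from 700 Hz ("key clicks");
    # the shaped version stays much narrower.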
The noise is effectively AM, since you are modulating the signal from 0 to full amplitude, and with the very fast amplitude change you get what looks like a characteristic AM signal, with a center carrier and symmetric sidebands.
Also, I’m not sure if people are aware of the number of radio systems that enable their smartphones.
NFC (e.g. Apple Pay) is a radio, range a few cm. Bluetooth is a radio, a few meters. WiFi is several radio systems, range tens of meters. A cell phone is several radio systems, range up to kilometers. GPS (and rival systems), range up to thousands of kilometers.
NFC is not really a radio. Basically it uses a loosely coupled transformer. It works at much less than one wavelength, and only the magnetic field matters.
100%. Every time I read the term "antenna" when referring to the coil used for NFC/RFID I suffer inside...
...and yet, efficiently transferring a 1kb file between two physically adjacent smartphones remains an apparently unsolved problem.
Magnets
Came here to see this. Thank you.
Same here, glad I'm not the only one.
I work in the RF world pretty regularly, and I still consider the superheterodyne receiver to be tantamount to magic.
Edwin Armstrong was a brilliant brilliant man.
Not that it matters much, but it seems to be somewhat unclear who came up with the idea for the superheterodyne receiver first. Could be Armstrong, or Lévy, or even Schottky. The patent in the US was eventually awarded to Lévy.
Armstrong definitely was a genius though. Before the superheterodyne receiver he also invented the regenerative receiver.
And you're right, the superheterodyne is such a marvelous technology. The principles it's based on aren't super complex in themselves, but the combination of them is genius.
https://en.wikipedia.org/wiki/Regenerative_circuit https://en.wikipedia.org/wiki/Superheterodyne_receiver
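For the curious, the mixing stage at the heart of the superheterodyne fits in a few lines of numpy (frequencies below are illustrative: a station at 1000 kHz and the classic 455 kHz IF):

    import numpy as np

    fs = 10e6
    t = np.arange(0, 0.01, 1 / fs)
    rf = np.cos(2 * np.pi * 1_000_000 * t)    # incoming station at 1000 kHz
    lo = np.cos(2 * np.pi * 1_455_000 * t)    # local oscillator 455 kHz above

    mixed = rf * lo                           # the mixer: just multiplication
    spectrum = np.abs(np.fft.rfft(mixed))
    freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
    print(freqs[spectrum > 0.5 * spectrum.max()])
    # -> [455000. 2455000.]: sum and difference products. A fixed bandpass
    #    filter keeps the 455 kHz IF; retuning only ever moves the LO.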
Ha, not magic, but conceptually the superheterodyne is an absolutely brilliant design, and it still hasn't lost its 'magic' even after a hundred years, and likely never will, despite newer digital approaches (these being more complex to implement).
"Edwin Armstrong was a brilliant brilliant man."
Right! ...And as you'd likely know, Armstrong's tormentor and nemesis was an arrogant, despicable bastard of the first order!
(Believe it or not, but decades ago I worked in a prototype lab at RCA and actually met David Sarnoff albeit briefly. That never changed my opinion of him.)
Antennas have always been black magic for me, and this article blew my mind with the "capacitor you pull apart". Thank you for posting this article, this is fantastic.
Not an RF engineer, but I mess with radios for a living.
Most of the time when people try to explain antennas they start talking resonance, which really describes a 'good' antenna.
What an antenna does is create an alternating magnetic field with an alternating electric field 90 degrees out of phase with each other. Blah blah blah quantum electrodynamics blah blah blah radiates photons.
Resonance means the antenna stores energy. That increases the electric and magnetic fields, making the antenna radiate more efficiently. Some antennas are very wideband and 'flat', and used for cough cough military cough cough and other applications.
In today’s article, I’m hoping to provide an introduction to radio that’s free of ham jargon and advanced math.
Sounds great! Let’s dig in.
… the fundamental mirroring behavior is still present, but it's usually managed pretty well. Accidental mirror images of unrelated transmissions can be mitigated by choosing the IF wisely, by designing the antenna to have a narrow frequency response, or by putting an RF lowpass filter in front of the mixer if needs be
Mission failed. Ah well.
Not really, unless you refer to the use of 'IF' and 'RF'. Maybe it would have been better if they had written these out as 'IF (intermediate frequency)' and 'RF (radio frequency)' with a link explaining the context in which IF is used, but for the rest that sentence looks OK to me.
Tim Hunkin has posted a remastered version of his "The Secret Life of the Radio" TV program (from 1987) which recreates some of Hertz and Marconi's experiments with spark gaps and coherers.
I can't recommend this entire series enough, Hunkin's work is a masterpiece.
I think it is also worth mentioning the role of the ionosphere - which is the (charged) part of the atmosphere that will reflect radio/EM waves, and make it possible to communicate with someone on the other side of the globe. The ionosphere has different layers, and is quite dynamic - depending on the sun and its activity.
Basically, imagine a charged shell around the earth that reflects electromagnetic waves back, and whose properties are constantly fluctuating. Solar storms (and the accompanying northern lights) are bad news for radio communication.
That's the very, very ELI5 version.
What's worth mentioning is that shortwave offers much lower transfer latency than optical fibre: radio propagates through air at essentially the speed of light in vacuum, while light in glass travels at roughly two-thirds of that, over cable routes longer than the great circle. So it's possible to establish faster cross-continental communication over radio than over trans-oceanic optical fibre cables.
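Rough numbers (the distances below are assumptions for illustration, e.g. a transatlantic hop, and the radio figure ignores the extra path length of ionospheric skips):

    C = 299_792_458                  # speed of light in vacuum, m/s

    great_circle_m = 5_600e3         # assumed great-circle distance, ~5,600 km
    fibre_route_m = 7_000e3          # assumed: cable routes exceed the great circle

    radio_ms = great_circle_m / C * 1e3          # HF travels at ~c in air
    fibre_ms = fibre_route_m / (0.67 * C) * 1e3  # light in glass: ~0.67c
    print(f"radio ~{radio_ms:.0f} ms, fibre ~{fibre_ms:.0f} ms one way")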
This has potential to be an interactive topic like one of https://ciechanow.ski 's topics.
This reminds me of a really cool video on superheterodyne receivers that Technology Connections did. https://www.youtube.com/watch?v=hz_mMLhUinw
An old and very accessible classic for the "general audience" to understand the theory behind "Radio Science" is Jim Sinclair's "How Radio Signals Work": all the basics plus where to find out more.
I truly enjoyed the article. When I played the Vimeo video of the ½ λ dipole antenna's electric field propagation, I reached for my headphones hoping to hear Dark Side of the Moon. No dice. I get antennas and their physical characteristics, but I am always intimidated by the math behind digital signal processing (DSP). Again, great article.
Another trick, which I hadn't really appreciated for a long time, is that it's VERY dark in the radio frequencies. Black bodies radiate barely any energy there. It's quiet, so if you shout even moderately loudly you can be heard halfway across the globe. It's permanently night, and even a small lamp shines quite far.
"Radio communications play a key role in modern electronics, but to a hobbyist, the underlying theory is hard to parse."
I don't believe radiocommunications and the electronics of radio is hard to understand—at least that's so at a level where a hobbyist can gain enjoyment from the subject.
I say that as someone who obtained a radio amateur's license in junior highschool at age 15.
Yes, radio engineering and its physics do get very complicated at the high end, and for a good understanding one requires advanced math, including partial differential equations such as Maxwell's equations and their SR (Special Relativity) extensions. Beyond that, one needs to understand the physics of electrodynamics, and that requires knowledge of quantum mechanics, including QFT (Quantum Field Theory), which is top-echelon physics and close to as complex as physics gets.
However, the hobbyist doesn't need to know an iota of that advanced complex stuff to enjoy radio as a hobby. Absolutely none of it.
All that he or she needs to know are very basic principles, such as how antennas receive and radiate signals, how radio signals are amplified and detected, and later on how signals are mixed, multiplied and heterodyned, and how radio transmitters and receivers work—even the principles behind the common superheterodyne receiver are pretty standard knowledge for a radio hobbyist.
Back when I was learning about radio I doubt very much that an article would have been written in the tone of this story, especially one implying that the subject could be difficult to understand even at a hobby level. Why, you may ask? Well, back then, if anyone had a hobby interest in electricity and electronics then essentially the only outlet for their interest was radio and perhaps television, as the other branches of electronics were not as readily accessible to hobbyists.
Nowadays that's changed; there's much more to hold a hobbyist's interest, such as programming, computers, computer games, and other electronics not based on radio technology (digital electronics, for instance), so knowledge of radio tech and radiocommunications theory has become much less commonplace, having been diluted amongst all these competing interests. Obviously the knowledge is still out there, but it's more widely dissipated and not as easily accessible in the practical sense, especially for hobbyists of a young age.
When radio was essentially all that there was around there were many more elementary books on radio available for younger readers and these increased in complexity as the hobbyist gained practical experience. For instance, when I first became interested in radio my first introduction to the subject—like most others—was building crystal set radios, and from there we advanced to incorporating tubes and transistors into our more advanced designs. For beginners, hands-on practical books such as how to build crystal sets which included many different designs were commonly available.
(Back then, a well known author of books on crystal sets and basic radio was Bernard B. Babani, an unforgettable name if ever there was one. His books are still available but you'd never know to look for them unless told about them.)
Today, many have never heard of crystal sets, let alone their 'cat's whisker' detectors, so when they become interested in the subject they're thrown in at the deep end. And not having the basics already under their belts, the more advanced radio theory comes as a bit of a shock.
I went to my friend's EECS graduation a long time ago at UCLA, and the founder of Qualcomm talked about how what drove him to get his PhD was his curiosity and determination to truly understand how radios worked.
He said that he got his PhD because that's pretty much how long it took before he felt like he really understood how his radio worked, and even then he sometimes wasn't sure.
Was a good speech that this article reminded me of.
If anyone enjoyed this article, then I'd recommend reading this one as well [1]; it's an interesting article with a focus on the relationship between radio and probabilistic reasoning in the early 1900s. [1] https://www.argmin.net/p/the-spirit-of-radio
Why is that?
It's because energy radiated by the transmitter must fall off as one over R squared in the far field. The frequency (or wavelength, take your pick) has nothing to do with the energy transmitted, because energy must be conserved. Putting in the frequency term then violates conservation of energy between the antennas. Then, at the receiving antenna, the conservation-of-energy error is patched up by assigning a bogus 'gain' to the receiver. The transmitter and receiver are asymmetric, but the path loss equation pretends that they are symmetric because that's easier for most people to understand and it works out 'end to end'.
Absolutely, I agree that the geometry of the problem dictates the 1/R^2 dependence, regardless of frequency. The gain, which I agree is a misleading way to think about the area, is related to the receive area through the frequency term. If you don't like that form of the path loss equation, I understand (I don't either!), but physics is not broken.
Where the "bogus" gain really shines, though: I can take my original receive antenna, operate it as a transmitter (so gain is now relevant), receive with my original transmit antenna (where I now care about area) and get the exact same result in terms of loss!
The formula on wiki has a distance squared term in the denominator tho?
Short answer: it doesn't, though I understand why it's misleading. Read my response above.
How can an equation that does not represent a balance of energy violate energy conservation?
By "path loss equation" I assume you mean the Friis equation, which is just the ratio of the power received at an antenna to the power given to the transmitter. It is correct and does not violate conservation of energy, since it says nothing about the power not received at the receiver.
What they're saying is that the geometrical interpretation of an outwardly expanding spherical shell of power shouldn't depend on frequency. In this respect they are correct and they have a good intuition for the problem.
Now here's the catch: if the receive area did not change as a function of frequency when the receive antenna gain is kept constant (it does change), that would break physics (it doesn't). The effective area of an antenna with fixed gain varies as lambda^2 (A_e = G·lambda^2/4π). In effect the geometric interpretation is still correct, but the variation of antenna area with frequency resolves the seeming paradox and saves physics.
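To see that the two bookkeeping styles agree, compute the received power once with Friis and once as (spherical power density) x (effective aperture A_e = G·lambda^2/4π); link parameters below are arbitrary examples:

    import math

    pt, gt, gr = 1.0, 1.0, 1.0        # 1 W transmitter, unity gains (example)
    r = 10e3                          # 10 km range
    lam = 3e8 / 2.4e9                 # wavelength at 2.4 GHz

    # Friis: Pr = Pt * Gt * Gr * (lambda / (4*pi*R))^2
    pr_friis = pt * gt * gr * (lam / (4 * math.pi * r)) ** 2

    # Geometric view: frequency-independent spreading times a receive
    # aperture that shrinks as frequency rises.
    density = pt * gt / (4 * math.pi * r ** 2)   # W/m^2, no lambda anywhere
    aperture = gr * lam ** 2 / (4 * math.pi)     # m^2, lambda enters only here
    print(pr_friis, density * aperture)          # identical numbers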
I don't think anybody says that it does. I believe the problem is calling the Friis transmission equation "free-space loss". Actually the Friis formula is composed of three terms: the receiving and transmitting antenna gains and the actual free-space term, which has the 1/R^2 dependency (and which isn't a "loss" in energy-balance terms, since the energy isn't lost, just not received at a certain point, so we could argue about that term too...)
Yep! Fully agreed with all your points, I was just trying to get at the original poster's line of thinking.
Transmitting and receiving antennas work the same way. Flip the sign of time in Maxwell’s equations, and radio waves will run perfectly backwards.
Yes and no. I emphatically agree that the way the Path Loss Equation (Friis) is taught is misleading. I much prefer the way you interpret it, with the transmit antenna represented with gain and the receiving antenna having only an effective receive area. It's much more intuitive because I can visualize a spherical shell of power radiating outward.
That said, a receive antenna absolutely does have "gain", which is evident from the antenna receiving a stronger or weaker signal depending on its orientation with respect to the transmit antenna. The key is this: for an arbitrary antenna, the (transmit, if you like) gain has a one-to-one relationship to the "effective receive area" at a given frequency, so talking about area and gain is equivalent, if not intuitive. We usually assume for point-to-point links that the antennas are pointed at each other, and in such cases (for good aperture antennas), you are absolutely right that the physical area and effective area are approximately equal. For ideal wire antennas, however, the physical area of the antenna is 0, but the effective area is nonzero (because of magic).
Now, I disagree that the path loss equation violates conservation of energy. The link between effective area and gain depends on the wavelength. When I increase the frequency of operation but keep the gain of the antennas constant, the areas decrease, so my receive antenna is physically smaller and the received power goes down. Not breaking physics. A lot of people will say "path loss gets worse as you go up in frequency", and this is extremely misleading if not "scientifically illiterate", as you put it. Sure, there are molecular absorption bands from oxygen/water that literally dissipate power in the atmosphere, but generally speaking, the path loss didn't get worse, your receive antenna just got smaller.
Now wait a minute, what if I just made my receive antenna larger? Well, you can do that! The problem is that because gain and area are linked, efficiently receiving power in a given LARGE area (with respect to the wavelength) implies high gain. High gain implies a very narrow beam (more like a laser pointer than a normal dipole spilling energy everywhere). So it becomes really important that I "point" my receive antenna perfectly at the transmitter. Satellite dishes are really big, and they absolutely have to be pointed accurately at the satellite.
"And the receiving antenna does not have any 'gain' other than physically getting bigger or smaller..."
Well, it depends on one's definition of gain! If you were to tell the designers of the ELT (the Extremely Large Telescope) that it had no gain over isotropic, they'd fall about laughing (remember, its method of operation also relies on collecting and concentrating incoming EM radiation, as do RF antennae). An antenna's effective gathering aperture and directivity, for both RX and TX, are just about everything; the coupling efficiency from the antenna to the feeder and RX/detector (and vice versa for the TX) just about covers the rest.
"...though the transmitting antenna can have gain depending on shape and size."
Uh? How? What's the difference? Physics says the law of reciprocity applies: a good transmitting antenna makes just as good a receiving antenna. The only proviso is that a transmitting antenna has to be designed to withstand high RF power levels (and even then, this only applies at TX power levels where I²R losses can cause enough heating to damage the antenna and feed lines; similarly, high TX power can lead to very high voltages which can arc over, and TX antennae are designed to handle this).
I used to work with microwave transmitters and receivers and my microwave dishes and other types of antennae were directly interchangeable—in fact, they were identical.
Re the Path Loss Equation: it works in the practical sense and is used everywhere. Fighting over technicalities here is akin to arguing the difference between Newton's laws of motion and their relativistic corrections under Einstein. It's damn obvious when one's applicable and the other is not.