In the Netherlands alone, these solar panels generate a power output equivalent to at least 25 medium sized nuclear power plants.
This didn't pass the smell test, so I checked: the author is looking at nameplate capacity, which is a completely useless metric for variable electricity production sources (a solar panel in my sunless basement has the same nameplate capacity as the same panel installed in the Sahara desert).
Looking at actual yearly energy generation data, this is more like 1.5 times the generation of an average nuclear power plant (NL solar production in 2023: 21TWh, US nuclear production in 2021: 778TWh by 54 plants).
Which maybe puts more into perspective the actual risks involved here. I'm not saying there shouldn't be more regulations and significantly better security practices, but otoh you could likely drive a big truck into the right power poles and cause a similar sized outage.
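Quick sanity check of the arithmetic above, using the figures as cited (rounding is mine):

```python
# Back-of-envelope check of the figures cited above.
nl_solar_twh = 21        # NL solar generation, 2023
us_nuclear_twh = 778     # US nuclear generation, 2021
us_plants = 54

per_plant = us_nuclear_twh / us_plants   # average yearly energy per plant
ratio = nl_solar_twh / per_plant
print(f"{per_plant:.1f} TWh/plant, NL solar = {ratio:.1f}x an average plant")
```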
For the purposes of information security, the nameplate capacity is the correct number to consider for a very simple reason: we must defend as if hackers will pick the absolute worst moment to attack the grid. That is the moment when the sun is shining and it's absolutely cloudless across the Netherlands, California, Germany, or wherever their target grid is.
At that moment, the attacker will not only blast the grid with the full output of the solar panels, but they will also put any attached batteries into full discharge mode as well, bypassing any safeties built into the firmware with new firmware. We must consider the worst case, which is that the attacker is trying to not only physically break the inverters, but the batteries, solar panels, blow fuses, and burn out substations. (Consider that if the inverters burn out and start fires, that's a feature for the attacker rather than a bug!)
So yes, not only is it 25 medium sized nuclear power plants, it's probably much higher than that! And worse, that number is growing exponentially with each year of the renewable transition.
This was probably the scariest security exposé in a long time. It's much, much worse than some zero-day for iPhones.
A bad iPhone bug might kill a few people who can't call emergency services, and cause a couple billion dollars of diffuse economic damage across the world. This set of bugs might kill tens of thousands by blowing up substations and causing outages at thousands to millions of homes, businesses, and factories during a heat wave. And the economic damage will not only be much higher, it will be concentrated.
This is wildly overstating the issue. Hackers are not going to break into hundreds of separate sites, compromise inverters, compromise relay protection, compromise SCADA systems, and execute a perfectly timed attack. Even if they did, these are distributed resources, they don't all go through a single substation and I doubt any one site could cause any major harm to any one substation.
Instead, they're going to get a few guys with guns to shoot some step-up transformers and drive away.
The problem with infosec people is they tend to wildly overestimate cyber attack potential and wildly underestimate the equivalent of the 5 dollar wrench attack.
This isn't hundreds of separate sites that have to be hacked individually. This is fewer than 10 clouds with no security to speak of and the ability to push evil firmware to millions of inverters worldwide, where in a few years at the current rate of manufacturing growth, it will be 10s, and then 100s of millions of inverters.
Yeah, a potato cannon filled with aluminum chaff or a medium-caliber semi-automatic rifle can take down a substation. But this is millions of homes and businesses, which can all have an evil firmware that triggers within seconds of each other. (There will inevitably be some internal clocks that are off by days/months/years, so it's not like it will happen without warning, but noticing the warning might be difficult.)
And the growth in sales is exponential!
Technically, anything that can put a hole in an oil-filled transformer. https://en.m.wikipedia.org/wiki/Transformer_types#Liquid-coo...
You don't need to break it... just crack the radiator enough for all the circulating fluid to drain, then it overheats.
Any transformer over about 5 MVA will probably be equipped with a low oil level switch that de-energizes it.
If all you wanted was to kill the power, I don't see the difference...
Sure, the repair is easier/quicker, but the economic damage was already done...
Important to point out this isn't just theory, it's actually happened (in the SF Bay Area!) with a regular rifle.
https://en.wikipedia.org/wiki/Metcalf_sniper_attack
https://www.npr.org/sections/thetwo-way/2014/02/05/272015606...
Also in the north GA mountains in the 1970s.
Most (more or less all) grid operators can operate their network remotely from a single control room.
I suspect most grids are extremely easy to hack (never tried; don't bite the hand that feeds you, etc.).
Info sec is just a hobby of mine. I install high voltage switch gear for a living.
I’d expect the opposite. All companies controlling equipment that is part of the “Bulk Electric System” have to be NERC CIP compliant and are audited regularly with large fines for non-compliance. Doesn't guarantee perfect (or even good security) but it’s more likely to be a priority.
How do fines make things better? They confiscate resources that could be used to improve.
The management at the utility doesn’t want to be recognized for being a deficient operator that doesn’t meet standards, so they hire employees to ensure they are compliant
A fine is a black eye for a utility where people pride themselves on the reliability of the service they provide
A lot of utilities have their own fibre, since they own poles/towers and need it for teleprotection anyway, so they can have a genuinely secure private network between the control room and significant power plants.
Hurray! I have experience that may shed some light. I worked on SCADA software (3 different ones) for about 15 years. I started off as a Systems Engineer for an Industrial Power Metering company (but writing software), built drivers for various circuit breakers and other power protection devices, and wrote drivers and other software for IEC61850 (substation modelling and connectivity standard). I’ve been the technical director of one of these SCADA systems, and in charge of bringing its security to “zero trust”. I’ve been on the phone with the FBI (despite not being an American or in America), and these days I design and lead the security development at a large software company.
I’ve been out of the Power Industry/SCADA game for about 6 years now, and never had huge involvement with solar farms, so please take this with a large grain of salt, but here is my take. 15 years ago, all anyone would say about industrial networks was “air gap!”. Security within SCADA products was designed solely to prevent bad operators from doing bad things. Security on devices was essentially non-existent, and firmware could often be updated via the same connectivity that the SCADA system had to the devices (although SCADA rarely supported this, it was still possible). In addition, SCADA systems completely trusted communication coming back from the devices themselves, making it relatively simple for a rogue device to exploit a buffer overrun in the SCADA. After Stuxnet, plus a significant push from the US government, SCADA systems moved from “defensive boundary, trust internally” to “zero trust”. However, devices have a long, long service life. Typically they would be deployed and left alone for 10+ years, and generally had little to no security. Security researchers left this space alone because the cost of entry was too high, but any time they did investigate a device, there were always trivial exploits.
Although SCADA (and other industrial control software), will be run on an isolated network, it will still be bridged in multiple places. This is in order to get data out of the system, but also to get data into the system (via devices, and off-site optimisation software). The other trend that happened over time was to centralize operations in order to have fewer operators controlling multiple sites. That means that compromising one network gives you access to a lot of real world hardware.
Engineers never trusted SCADA (wisely), and all of these systems would be well built with multiple fail-safes and redundancies and so on. However, if I were to be a state-actor, I’d target the SCADA. If you compromise that system, you have direct access to all devices and can potentially modify the firmware to do whatever you want. If there is security, the SCADA will be authorized.
I don’t think the security risks are overblown (what’s off is where people think the real problems are). As the systems have gotten ever more complex, we have ended up with such complicated interdependencies that it is impossible to analyse them completely deterministically. The “Northeast blackout of 2003” (where a SCADA bug led to a cascading failure) was used as a case study in this industry for many years, but if anything, I think the potential for intentional destruction is much higher.
I’m in this space, but PLC I/O networks from Schneider and Rockwell are still “trust internally”, and some HMI or SCADA has to have read/write access to them. At least with Rockwell you could specify which variables were externally writeable, whereas Schneider was essentially DMA from the network.
They don't need to break into separate sites though - the issue at hand is that the centralised "control plane" from the vendor (i.e. the API server that talks to consumers' apps) is a single point of failure, and can be incredibly vulnerable.
Here's a recent example where a 512-bit RSA signing key was being used to sign JWTs, allowing a "master" JWT to be forged and minted, giving control of every system on that vendor's platform.
https://rya.nc/vpp-hack.html
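For illustration: the JWT header itself tells an attacker exactly which algorithm is in play, and nothing in the token format enforces a sane key size, so a 512-bit modulus (factorable for a few hundred dollars of cloud compute) is purely the vendor's choice. A minimal sketch of peeking at a token header; the token below is a hypothetical example, not one from the linked write-up:

```python
import base64
import json

def jwt_header(token: str) -> dict:
    """Decode the (unsigned, attacker-readable) header segment of a JWT."""
    seg = token.split(".")[0]
    seg += "=" * (-len(seg) % 4)      # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(seg))

# Hypothetical token; the header decodes to {"alg":"RS256","typ":"JWT"}.
token = "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.e30.sig"
print(jwt_header(token)["alg"])       # RS256 = RSA, but says nothing about key size
```

Once the modulus is factored, the attacker holds the private key and can mint arbitrary "master" tokens that every downstream system will accept.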
The failure mode is much simpler: you don't need to physically break anything, you just need to drop 10GW of production from the grid (send a "turn off" command to all solar inverters) leading to a cascade of failures. Getting the grid back online is a laborious manual process which will take (a lot of) time. Think https://en.wikipedia.org/wiki/Northeast_blackout_of_2003 or https://en.wikipedia.org/wiki/2021_Texas_power_crisis .
It would be even more laborious and take more time to bring things back online if the attacker manages to damage or destroy equipment with an overload like the GP describes.
The "turning the grid up to 11" attack isn't really possible. I know it seems like it is, but the inverters will only advance frequency so much before they back off, the inverters will only increase voltage so much. Etc. Sounds scary, isn't practical.
Turning everything off when the panels are at peak output? That lets frequency sag enough that plants start tripping offline to protect themselves and the grid and it'll cascade across the continent in just a few minutes. Then you have a black start which might take months.
There's an excellent video on how catastrophic a black start is. https://youtu.be/uOSnQM1Zu4w?si=x0dA7X7-19CJm6Kf
Months isn’t correct. Unless there was damage it could be recovered within a day.
Would love to know more about this. How would that happen? What's the process to bring it back up so fast?
The video has a lot of good info and seems compelling. During the Texas freeze, many power company officials said the same thing: if the Texas grid had gone down, it would have taken weeks to bring everything back online.
It's called a black start (https://en.wikipedia.org/wiki/Black_start), and power companies plan for it, and the necessary components are regularly tested. It's not a fast process; it can take many hours to bring most of the grid back up. Last year we had a large-scale blackout here in Brazil, and an area larger than Texas lost power; most of it was back in less than a day.
The trick word here is "everything". Every time there's a large-scale blackout, there's some small parts of the grid which fail to come back and need repairs. What actually matters is how long it takes for most of the grid to come back online.
Inverters may be protected against changing settings, but if you can replace the firmware it can likely cause permanent hardware damage. Which the manufacturer, perhaps under pressure from its government, can do.
That doesn’t necessarily lead to a failure. In Texas a nuclear plant went down. About 1.2 (or 4 times that) GW (https://www.keranews.org/news/2023-06-22/with-temperatures-s...).
The grid stayed online. Likely thanks to grid batteries, see aforementioned link.
The risk is not turning all solar installations "on maximum". That happens nearly every summer day between 1 and 2pm. Automatic shutoff when the grid voltage is rising can be disabled, but more than 9 out of 10 consumer solar installations in the Netherlands deliver their maximum output on such a day for most of the summer, not running into the maximum voltage protections.
The big risk is turning them all off at the same time, while under maximum load. That will cause a brown-out that no other power generator can pick up that quickly. If the grid frequency drops far enough big parts of the grid will disconnect and cause blackouts to industry or whole areas.
It will take a lot of time to recover from that situation. Especially if it's done to the neighbouring grids as well so they can't step in to pick up some of the load.
Not if we have grid scale batteries. Solar shuts off, oh no. Sometime in the next four hours we need to get that fixed or something else up. Also flattens out the demand curve and allows arbitrage between the peak and valley.
Problem is, those batteries are not there (yet)...
Don’t underestimate exponentials. Tesla produced 6.5 GWh of battery storage in 2022, 14.7 GWh in 2023, and will probably double again in 2024.
And other battery manufacturers such as BYD grow fast too.
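To put the doubling in perspective, a rough sketch, under the assumption that the 2023 rate keeps doubling (Tesla alone, ignoring BYD et al., and ignoring that most of that output is already spoken for):

```python
# How many years until cumulative Tesla output could buffer a 20 GW,
# 4-hour solar gap? Assumes yearly doubling from the 2023 figure above.
needed_gwh = 20 * 4      # 80 GWh of storage to cover the gap
yearly_gwh = 14.7        # 2023 production
years = 0
while needed_gwh > 0:
    needed_gwh -= yearly_gwh
    yearly_gwh *= 2      # assumed continued doubling
    years += 1
print(years)             # 3
```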
Always underestimate exponentials: none exist in nature, they're just an early phase of an S curve (sigmoid, if you want the $10 word)
Which is kind of normal, we don't need infinite batteries ;-)
Power transformers have a loooooooot of thermal wiggle room before they fail in such a way and usually have non-computerized triggers for associated breakers, and (at least if done to code, which is not a given I'll admit) so do inverters and every other part. If you try to burn them out, the fuses will fail physically before they'll be a fire hazard.
This is true, especially for low-frequency (high-mass) inverters. The inverters covered here are overwhelmingly high-frequency (low-mass) inverters. We hope the manufacturers practiced great electrical engineering and layered multiple physical safeguards on top of the software-based controls built into the firmware.
Of course a company that skimped to the point of total neglect on software security would never skimp anywhere else, right? Right?
:crossed-fingers: <- This is what we are relying on here.
And even if they did all the right things with their physical safety, the attackers can still brick the inverters with bad firmware, requiring a high-skill firmware restore at a minimum, and at a maximum turning them into e-waste that requires a re-install by a licensed electrician.
At least in Europe, product safety organizations and regulatory agencies have taken up work to identify issues with stuff violating electrical codes (e.g. [1] [2]) and getting it recalled/pulled off the market.
Sadly there is no equivalent on the software side - it's easy enough to verify if a product meets electrical codes, but almost impossible to check firmware even if you have the full source code.
[1] https://www.bundesnetzagentur.de/SharedDocs/Pressemitteilung...
[2] https://www.t-online.de/heim-garten/aktuelles/id_100212010/s...
Well, not even high-skill: for "security" reasons, to prevent support issues, and to skimp on testing, the needed information is often accessible only to a chosen few.
Paradoxically, the effect of these "security" concerns often means that there are plenty of easily exploited methods in devices like that. And the only people who have them are the ones you need to worry about, rather than some 16-year-old finding one and playing blinkenlights with his friend's parents' house, causing trouble for himself but getting the hard-coded backdoor removed after the media got wind of it.
If I were dictator of infrastructure, I would ban any non-local two-way communication and would mandate that all small grid storage solutions run off a curve-flattening model that's uniform and predictable. Basically, they would store first and only be allowed to emit a fraction of their storage capacity to the grid afterwards. Maybe regulated by time of day.
While I agree that the important metric to consider is peak output and not average output, I would still guess that in a country like the Netherlands that peak output is nowhere near nameplate capacity.
You can get close to peak output just about anywhere, assuming the panels are angled rather than laying flat. You just can’t get it for very long in most locations.
The new method this past year that appears highly beneficial is to use various compass orientations of _vertically_ mounted panels. Solar cells got so cheap that every penny we spend on mounting hardware and rigid paneling now stings, and posts driven vertically into the ground with cables strung tight between them are cheaper than triangles, way easier to maintain (especially in places with winter), and trade a lower peak (or even a bimodal peak) for a much wider production curve.
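The bimodal / wider-curve claim can be sketched with a toy cosine-incidence model. All numbers here are illustrative (direct beam only, no diffuse light, crude sun path), so only the shape of the result is meaningful:

```python
import math

def output(sun_az, sun_el, pan_az, tilt):
    """Fraction of nameplate from direct beam only (toy model)."""
    # Unit vectors: x = west, y = south, z = up; azimuths measured from south.
    sun = (math.sin(sun_az) * math.cos(sun_el),
           math.cos(sun_az) * math.cos(sun_el),
           math.sin(sun_el))
    normal = (math.sin(pan_az) * math.sin(tilt),
              math.cos(pan_az) * math.sin(tilt),
              math.cos(tilt))
    return max(0.0, sum(s * n for s, n in zip(sun, normal)))

r = math.radians
south, east_west = [], []
for h in range(-6, 7):                        # hours from solar noon
    az = r(15 * h)                            # crude: sun moves 15 deg/hour
    el = r(max(5.0, 50 - 3 * abs(h) ** 1.5))  # crude midsummer elevation
    south.append(output(az, el, r(0), r(35)))          # south-facing, 35 deg tilt
    east_west.append((output(az, el, r(-90), r(90))    # vertical east face
                      + output(az, el, r(90), r(90))) / 2)  # + vertical west face

print(f"peaks: south-tilt {max(south):.2f}, east/west vertical {max(east_west):.2f}")
print(f"noon:  south-tilt {south[6]:.2f}, east/west vertical {east_west[6]:.2f}")
```

The east/west vertical pair tops out at roughly half the per-watt peak of the tilted panel and dips to zero at noon: lower, bimodal, and spread toward morning and evening.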
Tldr; We can't talk about proper numbers cos hackers.
The "bad iPhone bug" scenario happened a few weeks ago, in the form of Crowdstrike. You underestimated the damages.
you are splitting hairs about the wrong issue.
When it is sunny in the netherlands, it is likely sunny everywhere in NL because of how small the country is.
This is the situation where having so much solar power capacity (kW) is dangerous.
The risk scales with power output, but I would not term nameplate capacity a "completely useless metric".
I dunno. I lived next to a small inland sea most of my adult life. The number of times someone on the other side of town asserted it was raining when in fact it was not was quite high.
Every adult in Seattle eventually has to learn that if you have an activity planned on the other side of town, if you cancel it because it’s raining at your house you’re not going to get anything done. You have to phone a friend or just show up and then decide if you’re going to cancel due to weather.
Now to be fair, in the case of Seattle, there’s a mountain that multiplies this effect north versus south. NL doesn’t have that, but if you look at the weather satellite at the time of my writing, there are long narrow strips of precip over England that are taller but much narrower than NL.
clouds and rain do not behave the same as the sun.
What point are you trying to make here?
"Sometimes it rained in a part of town only" does not disprove the person saying "it can be sunny virtually everywhere at the same time in a small country"
For a simple demonstration, https://www.buienradar.nl/nederland/zon-en-wolken/wolkenrada... has been showing cloudless hours pretty regularly in the last month. Someone meaning malice can certainly keep an eye on that for a few days to find a good moment
Often friends of mine who live in my city report rain when I see none, or no rain when it's raining outside my window. That's to say nothing of a location 30km away, where basically anything can happen. Do we live on the same planet?
On which planet does the regular occurrence of one phenomenon disprove the regular occurrence of another?
It can both be true that weather is locally different on most days but coincides to be universally cloudless on a fair number of hours every late-summer month (easily within a reasonable waiting time for an attacker)
You are talking about energy, which is not the same thing as power. TWh == energy, GW == power.
The distinction is important, especially in the Netherlands, which has a capacity factor of only about 10%-15%, whereas most of the US will be at least 20%-25%, which is twice as high.
I'm not sure of the typical number of reactors in the Netherlands, but using the US average of 1.6/power plant may not be the most representative comparison.
I have no idea what you're talking about, since nowhere did I use solar capacity factor data nor did I look at number of reactors per plant.
You are using both with your energy generated numbers. That's where they come from.
Your solar TWh comes from 25GW at ~15% capacity factor, and to get your nuclear numbers you're looking at 1.6GW for each of nuclear "plants" when each reactor is usually about 1GW or less. There are ~90 reactors in the US, at 54 plants. The article is assuming 1 reactor per plant for the Netherlands.
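For anyone checking, the back-calculation (figures from the thread; rounding mine):

```python
# Where the implied capacity factor and per-plant size come from.
solar_gwh = 21_000                 # NL solar generation, 2023
solar_gw = 25                      # claimed nameplate capacity
hours = 8760                       # hours in a year
cf = solar_gwh / (solar_gw * hours)
print(f"implied NL solar capacity factor: {cf:.1%}")

nuclear_gwh, plants, reactors = 778_000, 54, 90   # US nuclear, 2021
avg_plant_gw = nuclear_gwh / plants / hours
print(f"avg US plant output: {avg_plant_gw:.2f} GW "
      f"({reactors / plants:.1f} reactors/plant)")
```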
Small addition that isn't mentioned in the English version of the article, but only in the original Dutch version: the article talks specifically about the Borssele power station [0] (which has a power output of 485MW).
[0]: https://en.wikipedia.org/wiki/Borssele_Nuclear_Power_Station
The point is about instantaneous power injected, not energy: keeping an AC grid at the right frequency is a tricky business, because production and consumption must match at every moment.
With too much production the frequency skyrockets; with too little, it plunges.
Classic grids are designed over large areas to average out load for big power plants, so those plants see only small instantaneous changes in demand: a 50MW power plant might see a 100-300kW instantaneous change, which is something it can handle quickly enough. With massive PV, wind, etc., demand on a big power plant can change MUCH more: a 50MW plant might suddenly need to scale back or power up by 10MW, and that's far too much to sustain.

When this happens and demand is too high, the frequency plunges, and grid dispatch operators have to cut off large areas to lower demand (so-called rolling blackouts). When demand drops too quickly, the frequency skyrockets and large power plants can't scale back fast enough, so they simply disconnect. With that generation gone, the frequency stabilizes; unfortunately, most PV is grid-tied, so when a power plant disconnects, most PV inverters that saw the frequency spike disconnect as well, creating a cascading effect of quickly alternating too-low and too-high frequency that causes blackouts over vast areas.
Long story short, a potential attack is simply planting a command: "at solar noon on 26 June, stop injecting into the grid, and keep not injecting until solar noon + 5 minutes". Within a second or so (allowing for time-sync issues), all inverters of a certain brand stop injecting; generation falls, there are a few rolling blackouts, and the large power plants compensate quickly. Then the 5-minute counter expires and all inverters restart injecting en masse while the large plants are still at full power; the frequency skyrockets, large plants disconnect, most grid-tied inverters follow them, and there is a large chance that an entire geographic segment of the grid falls. Interconnection operators have no time to react, and the blackout might quickly grow even larger, with almost all interconnections going down to protect the still-active parts of the grid, causing more frequency instability and thus more blackouts.
Such an attack might lead to some days without power.
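For a rough sense of the timescales involved, the textbook swing-equation estimate of the rate of change of frequency after a sudden generation loss. All parameter values here are assumptions for illustration, not real Dutch or ENTSO-E figures:

```python
# RoCoF right after a sudden generation loss: df/dt = -dP * f0 / (2 * H * S)
f0 = 50.0      # Hz, nominal grid frequency
H = 4.0        # s, assumed aggregate inertia constant of remaining plants
S = 60e9       # VA, assumed online synchronous generation capacity
dP = 20e9      # W, PV generation suddenly dropped at solar noon

rocof = -dP * f0 / (2 * H * S)
print(f"{rocof:.2f} Hz/s")

# Under-frequency load shedding commonly starts around 49.0 Hz in Europe;
# at this rate the first relays would trip in well under a second.
print(f"{(50.0 - 49.0) / -rocof:.2f} s to first load-shedding stage")
```

With numbers anywhere near this size, operators and automatic controls have essentially no time to intervene before protection relays start acting on their own.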
For solar panels, the nameplate capacity is usually also the power generated at the peak production time, which is the moment when an attacker turning off all inverters at the same time would have the most impact.
That is: for an attack (or any other failure), the most important metric is not the total power produced, but the instantaneous power production, which is the amount which has to be absorbed by the "spinning reserve" of other power plants when one power plant suddenly goes offline.
No, the nameplate capacity is what a solar panel will produce under perfect lighting, independent of the site where it's installed.
The peak theoretical power output of a solar panel depends on where it's installed, inclination, temperature, elevation, and so on. The actual peak power is going to take weather and dirty panels into account.
1kw nameplate in Ireland (or the Netherlands) is never going to give you an instantaneous 1kw output -- you're going to be lucky to see 60% of that.
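As a toy model of why, assuming simple linear irradiance scaling with a temperature derate (real modules are messier, with wiring, inverter, and soiling losses on top):

```python
# STC nameplate is measured at 1000 W/m^2 irradiance and 25 C cell temperature.
def pv_output_kw(nameplate_kw, irradiance_wm2, cell_temp_c,
                 temp_coeff=-0.004):   # ~-0.4%/C, typical for silicon cells
    derate = 1 + temp_coeff * (cell_temp_c - 25)
    return nameplate_kw * (irradiance_wm2 / 1000) * derate

# A clear northern-European summer noon might see ~850 W/m^2 and ~55 C cells:
print(round(pv_output_kw(1.0, 850, 55), 2))   # 0.75 kW from "1 kW" of panels
```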
But even 60% of 25GW is 15GW, well above a ~3GW reserve buffer. You need to take down more capacity than the buffer, and then power plants will disconnect from the grid as a fail-safe. Bringing the grid back up alone may take days or so, but all the appliances out there will be down for weeks. ClownStrike brought us pen-written boarding passes; glad we don't install crapware on hospital hardware.
No. You will definitely not get peak capacity even in the Sahara. They got those numbers under perfect conditions in a laboratory, not under real circumstances.
If memory serves, and I’ll admit it’s pretty fuzzy, the US tends to make ridiculously large nuclear reactors and Europe has an easier regulatory situation so they make more of them and smaller.
So in addition to the other stuff people mentioned, you might be off by another factor of 2 there. They also said “medium sized” so let’s call it 3.
This might have been true back in the 1970s, but at least as far as current development goes, is not.
The only new (non-Russian) European design built in the past 15 years is the EPR at 1600 MW. The only new American design built in the past 15 years is the AP1000 which as the name suggests is 1000 MW (technically 1100). AP1000 uses a massively simplified design to try and be much safer than other designs (NRC calculations say something like an order of magnitude) but is not cost competitive against most other forms of power generation. Which is why after Vogtle 3 and 4 there are no plans for more of them in the US.
It's not that the EPR is any better: they are actually doing worse in terms of money and schedule slippage than Vogtle did. Flamanville 3 had its first concrete poured in 2007 and still hasn't generated a single net watt!
It turns out that the pause in building nuclear reactors in the West from about 1995-2005 basically gutted the nuclear construction industries in both the US and Western Europe, and they haven't built back up. (In the US the pause was actually longer: after Three Mile Island in the early 1980s, things still under construction were finished but nothing new was started. Western Europe followed a similar path after Chernobyl.) The Russians kept at it, and the South Koreans have moved into the market (and China is building a huge number domestically, though I don't think they've built any internationally), but Western Europe and the US are far behind, and after Fukushima Daiichi I strongly suspect the Japanese are in the same boat. Without trained workers you can't build these in any predictable way, and when you pause construction for a decade you lose all of those workers, and it's really hard to build the workforce back up again.
Not only that: solar is badly misaligned with power requirements. Over the year it may be 1.5 times nuclear, but in winter, when demand is highest, it provides far less, on account of short days and low light; you typically get about 1/10 of the energy on a winter day that you do on a summer day. So: overproduction when unneeded, underproduction when needed.
It's the power output that is relevant for the failure mode described in the article, not the yearly production. And in terms of power output, 20GW is an incredibly common number for peak solar production (see e.g. https://energieopwek.nl/ at the end of Jul this year) in summer. Borssele (the medium-sized power plant named in the article) has a 485MWe net output. So yes, we _are_ talking about >25 mid-sized nuclear power plants!
Isn't latitude taken into account by grid operators when determining expected peak output? The owners would otherwise be installing bigger (more expensive) inverters than needed, so they'd know this value at least roughly. Even smarter would be to include the panel angle etc., but I'm not sure what level of detail it goes into beyond latitude, which is a well-known and exceedingly easy value to look up for an area.
I certainly see your point about it not being apples to apples, but on a cloudless summer day, the output afaik genuinely would be the stated figure (less degradation and capacity issues). The country is small enough that it's also not unlikely that we all have a cloudless day at the same time
One might well expect some sun in summer and put some of the used-in-winter gas plants into maintenance, or, in the future, count on summer production of hydrogen. But hacks are likely a transient issue, so I wouldn't foresee significant problems there.