
GraphCast: AI model for weather forecasting

meteo-jeff
39 replies
23h45m

In case someone is looking for historical weather data for ML training and prediction, I created an open-source weather API which continuously archives weather data.

Past and forecast data from multiple numerical weather models can be combined using ML to achieve better forecast skill than any individual model. Because each model is physically bound, the resulting ML model should be stable.

See: https://open-meteo.com
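
To make the API shape concrete, here's a hedged sketch of a forecast request (the endpoint and parameter names follow the public docs; the JSON shown is a truncated, made-up payload for illustration):

```python
import json
from urllib.parse import urlencode

# Build a forecast request for Berlin. Parameter names follow the public
# docs; treat the exact fields as assumptions rather than a spec.
base = "https://api.open-meteo.com/v1/forecast"
params = {
    "latitude": 52.52,
    "longitude": 13.41,
    "hourly": "temperature_2m",
    "past_days": 2,   # recent archived data alongside the forecast
}
url = f"{base}?{urlencode(params)}"
print(url)

# Responses are plain JSON; a truncated, made-up payload for illustration:
payload = json.loads("""
{"hourly": {"time": ["2023-11-14T00:00", "2023-11-14T01:00"],
            "temperature_2m": [7.2, 6.9]}}
""")
for t, temp in zip(payload["hourly"]["time"],
                   payload["hourly"]["temperature_2m"]):
    print(t, temp)
```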

Fatnino
8 replies
19h22m

Is there somewhere to see historical forecasts?

So not "the weather on 25 December 2022 was such and such" but rather "on 20 December 2022 the forecast for 25 December 2022 was such and such"

berniedurfee
4 replies
15h6m

I’ve always wanted to see something like that. I always wonder if forecasts are a coin flip beyond a window of a few hours.

mjs
1 replies
6h12m

Looks like https://sites.research.google/weatherbench/ attempts to "benchmark" different forecast models/systems.

They're very cautious about naming a "best" model though!

Weather forecasting is a multi-faceted problem with a variety of use cases. No single metric fits all those use cases. Therefore, it is important to look at a number of different metrics and consider how the forecast will be applied.

rrr_oh_man
0 replies
2h9m

That last paragraph sounds like something ChatGPT would write.

CSMastermind
0 replies
14h36m

I know at a minimum that hurricane forecasts have gotten significantly better over time. We can now quantify this:

https://www.nhc.noaa.gov/verification/verify5.shtml

Our 96 hour projections are as accurate today as the 24 hour projections were in 1990.

AuryGlenz
0 replies
1h17m

I just quit photographing weddings (and other stuff) this year. It's a job where the forecast really impacts you, so you tend to pay attention.

The amount of brides I've had to calm down when rain was forecast for their day is pretty high. In my experience, in my region, precipitation forecasts more than 3 days out are worthless except for when it's supposed to rain for several days straight. Temperature/wind is better but it can still swing one way or the other significantly.

For other types of shoots I'd tell people that ideally we'd postpone on the day of, and only to start worrying about it the day before the shoot.

I'm in Minnesota, so our weather is quite a bit more dynamic than many regions, for what it's worth.

jjp
1 replies
6h18m

Are you thinking of something like https://www.forecastadvisor.com/ ?

meteo-jeff
0 replies
3h52m

I would like to see an independent forecast comparison tool similar to Forecast Advisor, which evaluates numerical weather models. However, getting reliable ground truth data on a global scale can be a challenge.

Since Open-Meteo continuously downloads every weather model run, the resulting time series closely resembles assimilated gridded data. GraphCast relies on the same data to initialize each weather model run. By comparing past forecasts to future assimilated data, we can assess how much a weather model deviates from the "truth," eliminating the need for weather station data for comparison. This same principle is also applied to validate GraphCast.

Moreover, storing past weather model runs can enhance forecasts. For instance, if a weather model consistently predicts high temperatures for a specific large-scale weather pattern, a machine learning model (or a simple multilinear regression) can be trained to mitigate such biases. This improvement can be done for a single location with minimal computational effort.
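
A minimal sketch of that single-location bias correction, using synthetic numbers and plain least squares (nothing here is Open-Meteo's actual pipeline):

```python
import numpy as np

# Toy single-location bias correction: learn a linear map from the raw
# model forecast to the analysed "truth". All numbers are synthetic.
rng = np.random.default_rng(0)
truth = 15 + 8 * np.sin(np.linspace(0, 20, 500))        # pretend temperatures
forecast = 3.0 + 0.7 * truth + rng.normal(0, 1.0, 500)  # biased model output

# Design matrix [forecast, 1]: least squares fits slope + intercept.
X = np.column_stack([forecast, np.ones_like(forecast)])
coef, *_ = np.linalg.lstsq(X, truth, rcond=None)
corrected = X @ coef

raw_rmse = np.sqrt(np.mean((forecast - truth) ** 2))
fix_rmse = np.sqrt(np.mean((corrected - truth) ** 2))
print(raw_rmse, fix_rmse)  # the correction reduces the RMSE
```

A real setup would add more predictors (weather-pattern features, lead time, season), but the computational cost per location stays tiny, as the comment above notes.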

meteo-jeff
0 replies
18h51m

Not yet, but I am working towards it: https://github.com/open-meteo/open-meteo/issues/206

mdbmdb
6 replies
23h12m

Is it able to provide data on extreme events? Say, the current and potential path of a hurricane, similar to the .kml files that NOAA provides?

meteo-jeff
5 replies
22h52m

Extreme weather is predicted by numerical weather models. Correctly representing hurricanes has driven development on the NOAA GFS model for centuries.

Open-Meteo focuses on providing access to weather data for single locations or small areas. If you look at data for coastal areas, forecast and past weather data will show severe winds. Storm tracks or maps are not available, but might be implemented in the future.

mdbmdb
1 replies
22h3m

Appreciate the response. Do you know of any services that provide what I described in the previous comment? I'm specifically interested in extreme weather conditions and their visual representation (hurricanes, tornadoes, hail, etc.) with API capabilities.

swells34
0 replies
19h19m

Go to: nhc.noaa.gov/gis There's a list of data and products with kmls and kmzs and geojsons and all sorts of stuff. I haven't actually used the API for retrieving these, but NOAA has a pretty solid track record with data dissemination.

dmd
1 replies
20h55m

I would love to hear about this centuries-old NOAA GFS model. The one I know about definitely doesn't have that kind of history behind it.

K2h
0 replies
20h49m

Some of the oldest data may come from ships' logs going back to 1836.

https://www.reuters.com/graphics/CLIMATE-CHANGE-ICE-SHIPLOGS...

meteo-jeff
0 replies
20h50m

Sorry, decades.

KML files for storm tracks are still the best way to go. You could calculate storm tracks yourself for other weather models like DWD ICON, ECMWF IFS or MeteoFrance ARPEGE, but storm tracks based on GFS ensembles are easy to use with sufficient accuracy

Vagantem
3 replies
19h16m

That’s awesome! I’ve hooked something similar up to my service - https://dropory.com - which predicts which day it will rain the least for any location

Based on historical data!

polygamous_bat
2 replies
17h47m

Yikes, after completing three steps I was asked for my email. No to your bait and switch, thanks!

Vagantem
1 replies
17h11m

It can take up to 10 min to generate a report - I had a spinner before, but people just left the page. So I implemented a way to send it to them instead. I’ve never used the emails for anything other than that. Try it with a 10-min disposable email address if you like. Thanks for your feedback!

polygamous_bat
0 replies
13h0m

Ok, seems like your UI is not coming from a place of malice. However, pulling out an email input form at the final step is a very widespread UI dark pattern, so if nothing else please let people know that you will ask their email before they start interacting with your forms.

tomaskafka
2 replies
5h5m

I confirm, open-meteo is awesome and has a great API (and API playground!). And is the only source I know to offer 2 weeks of hourly forecasts (I understand at that point they are more likely to just show a general trend, but it still looks spectacular).

It's a pleasure being able to use it in https://weathergraph.app

brahbrah
1 replies
3h12m

And is the only source I know to offer 2 weeks of hourly forecasts

Enjoy the data directly from the source producing them.

American weather agency:https://www.nco.ncep.noaa.gov/pmb/products/gfs/

European weather agency:https://www.ecmwf.int/en/forecasts/datasets/open-data

The data’s not necessarily easy to work with, but it’s all there, and you get all the forecast ensembles (potential forecasted weather paths) too

tomaskafka
0 replies
1h44m

Thank you, I didn't know! I'd love to, but I'd need another 24 hours in a day to also process the data - I'm glad I can build on a work of others and use the friendly APIs :).

brna
2 replies
8h6m

Hi Jeff, Great work, Respect!

I just hit the daily limit on the second request at https://climate-api.open-meteo.com/v1/climate

I see the limit for non-commercial use should be "less than 10.000 daily API calls". Technically 2 is less than 10.000, I know, but still I decided to drop you a comment. :)

wodenokoto
1 replies
4h36m

10.000 requests / (24 hours * 60 minutes * 60 seconds) = 0.11 requests / second

or 1 request every ~9 seconds.

Maybe you just didn't space them enough.
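
The same arithmetic as a throwaway helper, assuming the quota is enforced as an even rate rather than a daily bucket (an assumption about the API, not documented behaviour):

```python
# Back-of-envelope pacing to stay under a daily request quota.
def min_interval_seconds(daily_quota):
    """Seconds between requests if the quota is spread evenly over a day."""
    return 24 * 60 * 60 / daily_quota

interval = min_interval_seconds(10_000)
print(f"{interval:.2f} s between requests")  # 8.64 s
# A client would then pace its calls with e.g. time.sleep(interval).
```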

brna
0 replies
2h50m

Maybe, that would be funny. ~7 requests per minute would be a more dev-friendly way of enforcing the same quota.

willsmith72
1 replies
19h8m

this is really cool, I've been looking for good snow-related weather APIs for my business. I tried looking on the site, but how does it work, being coordinates-based?

I'm used to working with different weather stations, e.g. seeing different snowfall predictions at the bottom of a mountain, halfway up, and at the top, where the coordinates are quite similar.

ryanlitalien
0 replies
1h27m

You'll need a local weather expert to assist, as terrain, geography and other hyper-local factors create forecasting unpredictability. For example, Jay Peak in VT has its own weather, the road in has no snow, but it's a raging snowstorm on the mountain.

comment_ran
1 replies
19h55m

How about https://pirateweather.net/en/latest/ ?

Has anyone compared this API with the one discussed here?

meteo-jeff
0 replies
19h21m

Both APIs use weather models from NOAA GFS and HRRR, providing accurate forecasts in North America. HRRR updates every hour, capturing recent showers and storms in the upcoming hours. PirateWeather gained popularity last year as a replacement for the Dark Sky API when Dark Sky servers were shut down.

With Open-Meteo, I'm working to integrate more weather models, offering access not only to current forecasts but also past data. For Europe and South-East Asia, high-resolution models from 7 different weather services improve forecast accuracy compared to global models. The data covers not only common weather variables like temperature, wind, and precipitation but also includes information on wind at higher altitudes, solar radiation forecasts, and soil properties.

Using custom compression methods, large historical weather datasets like ERA5 are compressed from 20 TB to 4 TB, making them accessible through a time-series API. All data is stored in local files; no database set-up required. If you're interested in creating your own weather API, Docker images are provided, and you can download open data from NOAA GFS or other weather models.

Omnipresent
1 replies
14h15m

This is great. I am very curious about the architectural decisions you've taken here. Is there a blog post / article about them? 80 yrs of historical data -- are you storing that somewhere in PG and the APIs are just fetching it? If so, what indices have you set up to make APIs fetch faster etc. I just fetched 1960 to 2022 in about 12 secs.

meteo-jeff
0 replies
8h41m

Traditional database systems struggle to handle gridded data efficiently. Using PG with time-based indices is memory- and storage-intensive. It works well for a limited number of locations, but global weather models at 9-12 km resolution have 4 to 6 million grid cells.

I am exploiting the homogeneity of gridded data. In a 2D field, calculating the data position for a geographical coordinate is straightforward. Once you add time as a third dimension, you can pick any timestamp at any point on earth. To optimize read speed, all time steps are stored sequentially on disk in a rotated/transposed OLAP cube.

Although the data now consists of millions of floating-point values without accompanying attributes like timestamps or geographical coordinates, the storage requirements are still high. Open-Meteo chunks data into small portions, each covering 10 locations and 2 weeks of data. Each block is individually compressed using an optimized compression scheme.

While this approach isn't groundbreaking and is supported by file formats like NetCDF, Zarr, or HDF5, the challenge lies in efficiently working with multiple weather models and updating data with each new weather model run every few hours.
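
A toy sketch of that chunking scheme (the chunk sizes match the 10-locations-by-2-weeks description above, but the indexing code is purely illustrative, not Open-Meteo's):

```python
# Toy sketch of the chunking described above: a (location, time) grid
# split into blocks of 10 locations x 2 weeks of hourly steps.
LOC_CHUNK, TIME_CHUNK = 10, 14 * 24

def chunk_id(loc, t, n_time_chunks):
    """Map one (location, hourly-timestep) value to (chunk, offset-in-chunk)."""
    chunk = (loc // LOC_CHUNK) * n_time_chunks + (t // TIME_CHUNK)
    offset = (loc % LOC_CHUNK) * TIME_CHUNK + (t % TIME_CHUNK)
    return chunk, offset

# Reading a week of data for one location touches at most two chunks,
# so a time-series query decompresses only a tiny slice of the dataset:
n_time_chunks = 365 * 24 // TIME_CHUNK + 1
chunks = {chunk_id(12345, t, n_time_chunks)[0]
          for t in range(2000, 2000 + 7 * 24)}
print(len(chunks))
```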

You can find more information here: https://openmeteo.substack.com/i/64601201/how-data-are-store...

Guestmodinfo
1 replies
10h25m

I always suspect that they don't tell me the actual temperature. Maybe I am totally wrong, but I suspect it. I need to get my own physical thermometer (not the digital one) in my room and outside my house and have a camera focused on it, so that later I can speed up the video and see how much the temperature varied the previous night.

kubiton
0 replies
9h31m

What? Why?

just_testing
0 replies
17h13m

I was going to ask about air quality, but just opened the site and you have air quality as well! Thanks!

caseyf7
0 replies
12h23m

How did you handle missing data? I’ve used NOAA data a few times and I’m always surprised at how many days of historical data are missing. They have also stopped recording in certain locations and then start in new locations over time making it hard to get solid historical weather information.

boxed
0 replies
23h27m

Open-Meteo has a great API too. I used it to build my iOS weather app Frej (open source and free: https://github.com/boxed/frej)

It was super easy and the responses are very fast.

_visgean
0 replies
15h52m

There is also https://github.com/google-research/weatherbench2 which has baselines of numerical weather models.

3abiton
0 replies
8h35m

Are multiple data sources supported?

serjester
18 replies
23h49m

To call this impressive is an understatement. Using a single GPU, it outperforms models that run on the world's largest supercomputers. Completely open-sourced, not just the model weights. And fairly simple training/input data.

... with the current version being the largest we can practically fit under current engineering constraints, but which have potential to scale much further in the future with greater compute resources and higher resolution data.

I can't wait to see how far other people take this.

wenc
16 replies
23h36m

It builds on top of supercomputer model output and does better at the specific task of medium term forecasts.

It is a kind of iterative refinement on the data that supercomputers produce — it doesn’t supplant supercomputers. In fact the paper calls out that it has a hard dependency on the output produced by supercomputers.

carbocation
6 replies
23h10m

I don't understand why this is downvoted. This is a classic thing to do with deep learning: take something that has a solution that is expensive to compute, and then train a deep learning model from that. And along the way, your model might yield improvements, too, and you can layer in additional features, interpolate at finer-grained resolution, etc. If nothing else, the forward pass in a deep learning model is almost certainly way faster than simulating the next step in a numerical simulation, but there is room for improvement as they show here. Doesn't invalidate the input data!
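
The generic pattern, sketched with a toy "expensive" function and a polynomial surrogate (purely illustrative; real systems like GraphCast use neural networks trained on reanalysis output):

```python
import numpy as np

# Surrogate-model pattern: run the expensive solver offline to build a
# training set, then fit a cheap model that is fast at inference time.
def expensive_simulation(x):
    # Stand-in for a costly numerical computation.
    return np.sin(3 * x) * np.exp(-0.3 * x)

x_train = np.linspace(0.0, 5.0, 200)
y_train = expensive_simulation(x_train)   # the costly part, done once

# Cheap surrogate: a least-squares polynomial fit (domain auto-rescaled).
surrogate = np.polynomial.Polynomial.fit(x_train, y_train, deg=15)

x_test = np.linspace(0.1, 4.9, 57)        # mostly unseen evaluation points
err = np.max(np.abs(surrogate(x_test) - expensive_simulation(x_test)))
print(err)                                # small, at a fraction of the cost
```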

danielmarkbruce
3 replies
22h17m

Because "iterative refinement" is sort of wrong. It's not a refinement and it's not iterative. It's an entirely different model to physical simulation which works entirely differently and the speed up is order of magnitude.

Building a statistical model to approximate a physical process isn't a new idea for sure.. there are literally dozens of them for weather.. the idea itself isn't really even iterative, it's the same idea... but it's all in the execution. If you built a model to predict stock prices tomorrow and it generated 1000% pa, it wouldn't be reasonable for me to call it iterative.

kridsdale3
1 replies
21h59m

It is iterative when you look at the scope of "humans trying to solve things over time".

danielmarkbruce
0 replies
21h20m

lol, touche.

andbberger
0 replies
21h30m

"amortized inference" is a better name for it

borg16
1 replies
22h9m

the forward pass in a deep learning model is almost certainly way faster than simulating the next step in a numerical simulation

Is this the case in most of such refinements (architecture wise)?

danielmarkbruce
0 replies
21h13m

Practically speaking yes. You'd not likely build a statistical model when you could build a good simulation of the underlying process if the simulation was already really fast and accurate.

silveraxe93
3 replies
23h1m

Could you point me to the part where it says it depends on supercomputer output?

I didn't read the paper but the linked post seems to say otherwise? It mentions it used the supercomputer output to impute data during training. But for prediction it just needs:

For inputs, GraphCast requires just two sets of data: the state of the weather 6 hours ago, and the current state of the weather. The model then predicts the weather 6 hours in the future. This process can then be rolled forward in 6-hour increments to provide state-of-the-art forecasts up to 10 days in advance.

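
That rollout is autoregressive; schematically, with `model` standing in for the trained network:

```python
def rollout(model, prev, curr, steps=40):
    """Roll a 6-hourly one-step model forward; 40 steps = 10 days."""
    out = []
    for _ in range(steps):
        nxt = model(prev, curr)   # predict 6 h ahead from the last two states
        out.append(nxt)
        prev, curr = curr, nxt    # feed the prediction back in as input
    return out

# Toy "model": each state is a number; next = curr + (curr - prev),
# i.e. naive persistence of the 6-hour trend.
states = rollout(lambda p, c: c + (c - p), prev=0, curr=1, steps=5)
print(states)  # [2, 3, 4, 5, 6]
```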
serjester
2 replies
22h54m

You can read about it more in their paper. Specifically page 36. Their dataset, ERA5, is created using a process called reanalysis. It combines historical weather observations with modern weather models to create a consistent record of past weather conditions.

https://storage.googleapis.com/deepmind-media/DeepMind.com/B...

silveraxe93
0 replies
22h48m

Ah nice. Thanks!

dekhn
0 replies
20h57m

I can't find the details, but if the supercomputer job only had to run once, or a few times, while this model can make accurate predictions repeatedly on unique situations, then it doesn't matter as much that a supercomputer was required. The goal is to use the supercomputer once, to create a high value simulated dataset, then repeatedly make predictions from the lower-cost models.

pkulak
2 replies
22h16m

Why can't they just train on historical data?

xapata
0 replies
21h55m

We don't have enough data. There's only one universe, and it's helpful to train on counter-factual events.

_visgean
0 replies
16h15m

ERA5 is based on historical data. See it for yourself: https://cds.climate.copernicus.eu/cdsapp#!/dataset/reanalysi... , https://www.ecmwf.int/en/forecasts/dataset/ecmwf-reanalysis-...

I don't think using raw historical data would work for any data-intensive model; afaik the data is patchy, with spots where we don't have many datapoints, e.g. the middle of the ocean... Also, there are new satellites that have only been available for the last x years, and you want to be able to use these for the new models. So you need a re-analysis of what it would have looked like if you had that data 40 years ago...

Also it's a very convenient dataset because many other models were trained on it (https://github.com/google-research/weatherbench2), so it's easy to do benchmarking.

whatever1
0 replies
22h39m

So, best-case scenario, we can avoid some computation at inference, assuming that historical system dynamics are still valid. This model needs to be constantly monitored against full-scale simulations and rectified over time.

westurner
0 replies
22h17m

"BLD,ENH: Dask-scheduler (SLURM,)," https://github.com/NOAA-EMC/global-workflow/issues/796

Dask-jobqueue (https://jobqueue.dask.org/):

provides cluster managers for PBS, SLURM, LSF, SGE and other [HPC supercomputer] resource managers

Helpful tools for this work: Dask-labextension, DaskML, CuPY, SymPy's lambdify(), Parquet, Arrow

GFS: Global Forecast System: https://en.wikipedia.org/wiki/Global_Forecast_System

TIL about Raspberry-NOAA and pywws in researching and summarizing for a comment on "Nrsc5: Receive NRSC-5 digital radio stations using an RTL-SDR dongle" (2023): https://news.ycombinator.com/item?id=38158091

thatguysaguy
0 replies
23h36m

They said a single TPU machine, to be fair, which means something like 8 TPUs (still impressive).

pyb
14 replies
21h7m

Curious. How can AI/ML perform on a problem that is, as far as I understand, inherently chaotic / unpredictable ? It sounds like a fundamental contradiction to me.

keule
8 replies
21h2m

IMO a chaotic system will not allow for long-term forecasts, but if there is any type of pattern to recognize (and I would assume there are plenty), an AI/ML model should be able to create short-term predictions with high accuracy.

pyb
3 replies
21h0m

Not an expert, but "Up to 10 days in advance" sounds like long-term to me ?

joaogui1
1 replies
20h55m

I think 10 days is basically the normal term for weather, in that we can get decent predictions for that span using "classical"/non-ML methods.

pyb
0 replies
20h49m

IDK, I wouldn't plan a hike in the mountains based on 10-day predictions.

keule
0 replies
20h25m

To be clear: by short-term I meant the 6 hours mentioned in the article. They use those 6 hours to create forecasts for up to 10 days. I would think that the initial predictors for a phenomenon (like a hurricane) are well inside that timespan. By long-term, I meant way beyond a 14-day window.

kouru225
3 replies
20h59m

But AI/ML models require good data and the issue with chaotic systems like weather is that we don’t have good enough data.

joaogui1
2 replies
20h53m

The issue with chaotic systems is not data; it's that the error grows superlinearly with time, and since you always start with some kind of error (normally due to measurement limitations), after a certain time horizon the error becomes too significant to trust the prediction. That doesn't have a lot to do with data quality for ML models.

kouru225
1 replies
20h51m

That’s an issue with data: If your initial conditions are wrong (Aka your data collection has any error or isn’t thorough enough) then you get a completely different result.

nl
0 replies
17h40m

Every measurement has inherent errors in it - and those errors are large if the task is to measure the location and velocity of every molecule in the atmosphere.

You also need to measure the exact amount of solar radiation before it hits these molecules (which is impossible, so we assume this is constant depending on latitude and time)

These errors compound (the butterfly effect) which is why we can't get perfect predictions.

This is a limit inherent in physical systems because of physics, not really a data problem.

vosper
1 replies
21h2m

Weather isn’t fundamentally unpredictable. We predict weather with a fairly high degree of accuracy (for most practical uses), and the accuracy is getting better all the time.

https://scijinks.gov/forecast-reliability

sosodev
0 replies
20h20m

I'm kinda surprised that this government science website doesn't seem to link sources. I'd like to read the research to understand how they're measuring the accuracy.

kouru225
1 replies
21h1m

Yes. Very accurate as long as you don’t need to predict the unpredictable. So it’s useless.

Edit: I do see a benefit to the idea if you compare it to the Chaos Theorists “gaining intuition” about systems.

pyb
0 replies
20h48m

IDK if it's useless, but it's counter-intuitive to me.

crazygringo
0 replies
17h13m

Because there are tons of parts of weather where chaos isn't the limiting factor currently.

There are a limited number of weather stations producing measurements, and a limited "cell size" for being able to calculate forecasts quickly enough, and geographical factors that aren't perfectly accounted for in models.

AI is able to help substantially with all of these -- from interpolation to computational complexity to geography effects.

xnx
8 replies
1d1h

I continue to be a little confused by the distinction between Google, Google Research and DeepMind. Google Research had made this announcement about 24-hour forecasting just 2 weeks ago: https://blog.research.google/2023/11/metnet-3-state-of-art-n... (which is also mentioned in the GraphCast announcement from today)

mukara
7 replies
1d

DeepMind recently merged with the Brain team from Google Research to form `Google DeepMind`. It seems this was done to have Google DeepMind focused primarily (only?) on AI research, leaving Google Research to work on other things in more than 20 research areas. Still, some AI research involves both orgs, including MetNet in weather forecasting.

In any case, GraphCast is a 10-day global model, whereas MetNet is a 24-hour regional model, among other differences.

xnx
4 replies
1d

Good explanation. Now that both the 24-hour regional and 10-day global models have been announced in technical/research detail, I suppose there might still be a general blog post about how much forecasting has improved when you search for "weather" or check the forecast on Android.

kridsdale3
2 replies
21h56m

IIRC the MetNet announcement a few weeks ago said that their model is now used when you literally Google your local weather. I don't think it's available yet to any API that third party weather apps pull from, so you'll have to keep searching "weather in Seattle" to see it.

wenyuanyu
0 replies
5h36m

Any idea why it is still showing the "weather.com" link next to the forecast?

daemonologist
0 replies
20h24m

It's also used, at least for the high resolution precipitation forecast, in the default Android weather app (which is really part of the "Google" app situation).

mnky9800n
0 replies
23h53m

That would require your local weather service to use these models

danielmarkbruce
1 replies
23h58m

Is there a colab example (and/or have they released the models) for MetNet like they have here for GraphCast?

mukara
0 replies
23h49m

MetNet-3 is not open-source, and the announcement said it's already integrated into Google products/services needing weather info. So, I'd doubt there's anything like a colab example.

lispisok
8 replies
1d

I've been following these global ML weather models. The fact they make good forecasts at all was very impressive. What is blowing my mind is how fast they run. It takes hours on giant super computers for numerical weather prediction models to forecast the entire globe. These ML models are taking minutes or seconds. This is potentially huge for operational forecasting.

Weather forecasting has been moving its focus towards ensembles to account for uncertainty in forecasts. I see a future of large ensembles of ML models being run hourly, incorporating the latest measurements.

wenc
4 replies
23h59m

Not to take away from the excitement but ML weather prediction builds upon the years of data produced by numerical models on supercomputers. It cannot do anything without that computation and its forecasts are dependent on the quality of that computation. Ensemble models are already used to quantify uncertainty (it’s referenced in their paper).

But it is exciting that they are able to recognize patterns in multi-year data and produce medium-term forecasts.

Some comments here suggest this replaces supercomputer models. That would be a wrong conclusion. It does not (the paper explicitly states this). It uses their output as input data.

boxed
3 replies
23h25m

I don't get this. Surely past and real weather should be the input training data, not the output of numerical modeling?

counters
2 replies
23h14m

Well, what is "real weather data?"

We have dozens of complementary and contradictory sources of weather information. Different types of satellites measuring EM radiation in different bands, weather stations, terrestrial weather radars, buoys, weather balloons... it's a massive hodge-podge of different systems measuring different things in an uncoordinated fashion.

Today, it's not really practical to assemble that data and directly feed it into an AI system. So the state-of-the-art in AI weather forecasting involves using an intermediate representation - "reanalysis" datasets which apply a sophisticated physics based weather model to assimilate all of these data sets into a single, self-consistent 3D and time-varying record of the state of the atmosphere. This data is the unsung hero of the weather revolution - just as the WMO's coordinated synoptic time observations for weather balloons catalyzed effective early numerical weather prediction in the 50's and 60's, accessible re-analysis data - and the computational tools and platforms to actually work with these peta-scale datasets - has catalyzed the advent of "pure AI" weather forecasting systems.

goosinmouse
0 replies
22h33m

Great comment, thank you for sharing your insights. I don't think many people truly understand just how massive these weather models are and the sheer volume of data assimilation work that's been done for decades to get us to this point today.

I always have a lot of ideas about using AI to solve very small scale weather forecasting issues, but there's just so much to it. It's always a learning experience for sure.

boxed
0 replies
9h55m

Oh yea, sure. But the article makes it seem like the model is trained on some predictive model, instead of a synthesis model. That seems weird to me.

mnky9800n
0 replies
23h46m

It uses ERA5 data, which is reanalysis. These models will always need the numerical training data. What's impressive is how well they emulate the physics in those models so cheaply. But since the climate changes, there will eventually be different weather in different places.

https://www.ecmwf.int/en/forecasts/documentation-and-support

kridsdale3
0 replies
21h54m

This is basically equivalent to NVIDIA's DLSS machine learning running on Tensor Cores to "up-res" or "frame-interpolate" the extremely computationally intensive job the traditional GPU rasterizer does to simulate a world.

You could numerically render a 4k scene at 120FPS at extreme cost, or you could render a 2k scene at 60FPS, then feed that to DLSS to get a close-enough approximation of the former at enormous energy and hardware savings.

counters
0 replies
23h33m

Absolutely - but large ensembles are just the tip of the iceberg. Why bother producing an ensemble when you could just output the posterior distribution of many forecast predictands on a dense grid? One could generate the entire ensemble-derived probabilities from a single forward model run.

Another very cool application could incorporate generative modeling. Inject a bit of uncertainty in some observations and study how the manifold of forecast outputs changes... ultimately, you could tackle things like studying the sensitivity of forecast uncertainty for, say, a tropical cyclone or nor'easter relative to targeted observations. Imagine a tool where you could optimize where a Global Hawk should drop rawinsondes over the Pacific Ocean to maximally decrease forecast uncertainty for a big winter storm impacting New England...

We may not be able to engineer the weather anytime soon, but in the next few years we may have a new type of crystal ball for anticipating its nuances with far more fidelity than ever before.

brap
8 replies
20h36m

Beyond the difficulty of running calculations (or even accurately measuring the current state), is there a reason to believe weather is unpredictable?

I would imagine we probably have a solid mathematical model of how weather behaves, so given enough resources to measure and calculate, could you, in theory, predict the daily weather going 10 years into the future? Or is there something inherently “random” there?

counters
4 replies
20h26m

What you're describing is effectively how climate models work; we run a physical model which solves the equations that govern how the atmosphere works out forward in time for very long time integrations. You get "daily weather" out as far as you choose to run the model.

But this isn't a "weather forecast." Weather forecasting is an initial value problem - you care a great deal about how the weather will evolve from the current atmospheric conditions. Precisely because weather is a result of what happens in this complex, 3D fluid atmosphere surrounding the Earth, it happens that small changes in those initial conditions can have a very big impact on the forecast on relatively short time-periods - as little as 6-12 hours. Small perturbations grow into larger ones and feedback across spatial scales. Ultimately, by day ~3-7, you wind up with a very different atmospheric state than what you'd have if you undid those small changes in the initial conditions.

This is the essence of what "chaos" means in the context of weather prediction; we can't perfectly know the initial conditions we feed into the model, so over some relatively short time, the "model world" will start to look very different than the "real world." Even if we had perfect models - capable of representing all the physics in the atmosphere - we'd still have this issue as long as we had to imperfectly sample the atmosphere for our initial conditions.

So weather isn't inherently "unpredictable." And in fact, by running lots of weather models simultaneously with slightly perturbed initial conditions, we can suss out this uncertainty and improve our estimate of the forecast weather. In fact, this is what's so exciting to meteorologists about the new AI models - they're so much cheaper to run that we can much more effectively explore this uncertainty in initial conditions, which will indirectly lead to improved forecasts.
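This sensitivity to initial conditions is easy to demonstrate with the classic Lorenz-63 toy system (a sketch of the chaos idea, not a weather model): two runs differing by one part in a billion end up in completely different states.

```python
import numpy as np

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system."""
    x, y, z = state
    deriv = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * deriv

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])  # tiny "initial condition" error

for _ in range(10000):  # integrate both runs for ~50 time units
    a, b = lorenz_step(a), lorenz_step(b)

# The two "forecasts" have fully decorrelated despite the tiny perturbation.
print("separation:", np.linalg.norm(a - b))
```

Running an ensemble amounts to doing this with many perturbed copies of `b` and looking at the spread.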

willsmith72
1 replies
19h2m

Is it possible to self-correct by looking at initial-value errors in the past? Is it too hard to prescribe the error in the initial value?

counters
0 replies
17h30m

Yes, this is effectively what 4DVar data assimilation is [1]. But it's very, very expensive to continually run new forecasts with re-assimilated state estimates. Actually, one of the _biggest_ impacts that models like GraphCast might have is providing a way to do exactly this - rapidly re-running the forecast in response to updated initial conditions. By tracking changes in the model evolution over subsequent re-initializations like this, one might be able to better quantify expected forecast uncertainty, even more so than just by running large ensembles.

Expect lots of R&D in this area over the next two years...

[1]:https://www.ecmwf.int/en/about/media-centre/news/2022/25-yea...

brap
1 replies
2h51m

So isn’t it just a problem of measurement then?

Say you had a massive array of billions of perfect sensors in different locations, and had all the computing power to process this data, would an N year daily forecast then be a solved problem?

For the sake of the argument I’m ignoring ”external” factors that could affect the weather (e.g meteors hitting earth, changes in man-made pollution, etc)

counters
0 replies
2h28m

At that point you're slipping into Laplace's Demon.

In practical terms, we see predictability horizons get _shorter_ when we increase the observation density and spatial resolution of our models, because more small errors from slightly imperfect observations and models still cascade to larger scales.

ethanbond
0 replies
20h29m

AFAIK there's nothing *random* anywhere except near the atomic/subatomic scale. Everything else is just highly chaotic, hard-to-forecast deterministic causal chains.

danbrooks
0 replies
20h33m

Small changes in initial state can lead to huge changes down the line. See: the butterfly effect or chaos theory.

https://en.wikipedia.org/wiki/Chaos_theory

_visgean
0 replies
16h7m

See https://en.wikipedia.org/wiki/Numerical_weather_prediction

Present understanding is that this chaotic behavior limits accurate forecasts to about 14 days even with accurate input data and a flawless model. In addition, the partial differential equations used in the model need to be supplemented with parameterizations for solar radiation, moist processes (clouds and precipitation), heat exchange, soil, vegetation, surface water, and the effects of terrain.

I think there is a hope that DL models won't have this problem.

robertlagrant
7 replies
1d

This is fascinating:

For inputs, GraphCast requires just two sets of data: the state of the weather 6 hours ago, and the current state of the weather. The model then predicts the weather 6 hours in the future. This process can then be rolled forward in 6-hour increments to provide state-of-the-art forecasts up to 10 days in advance.
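The rollout itself is just a loop that feeds each prediction back in as input. A sketch of the idea, where `step_model` is a hypothetical stand-in for the learned 6-hour transition (not GraphCast itself):

```python
import numpy as np

def rollout(step_model, state_prev, state_now, n_steps):
    """Autoregressively roll a one-step (6 h) model forward n_steps times.

    `step_model(prev, now) -> next` stands in for the learned transition;
    each prediction is fed back in as the new input.
    """
    trajectory = []
    for _ in range(n_steps):
        state_next = step_model(state_prev, state_now)
        trajectory.append(state_next)
        state_prev, state_now = state_now, state_next
    return trajectory

# Toy stand-in "model": persistence plus trend.
toy = lambda prev, now: now + (now - prev)

# 10 days at 6-hour steps = 40 iterations.
states = rollout(toy, np.array([0.0]), np.array([1.0]), n_steps=40)
print(len(states), states[-1])
```

Any error the one-step model makes compounds through this loop, which is why long-horizon stability is the hard part.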
counters
1 replies
23h40m

It's worth pointing out that "state of the weather" is a little bit hand-wavy. The GraphCast model requires a fully-assimilated 3D atmospheric state - which means you still need to run a full-complexity numerical weather prediction system with a massive amount of inputs to actually get to the starting line for using this forecast tool.

Initializing directly from, say, geostationary and LEO satellite data with complementary surface station observations - skipping the assimilation step entirely - is clearly where this revolution is headed, but it's very important to explicitly note that we're not there yet (even in a research capacity).

baq
0 replies
22h5m

Yeah, current models aren't quite ready to ingest real-time noisy data like the actual weather… I hear they go off the rails if preprocessing is skipped (outliers, etc.)

broast
1 replies
23h54m

Weather is Markovian

hakuseki
0 replies
14h19m

That is not strictly true. The weather at time t0 may affect non-weather phenomena at time t1 (e.g. traffic), which in turn may affect weather at time t2.

Furthermore, a predictive model is not working with a complete picture of the weather, but rather some limited-resolution measurements. So, even ignoring non-weather, there may be local weather phenomena detected at time t0, escaping detection at time t1, but still affecting weather at time t2.

Al-Khwarizmi
1 replies
23h40m

I don't know much about weather prediction, but if a model can improve the state of the art only with that data as input, my conclusion is that previous models were crap... or am I missing something?

postalrat
0 replies
22h57m

Read the other comments.

Imanari
0 replies
23h48m

Interesting indeed, only one lagged feature for time series forecasting? I’d imagine that including more lagged inputs would increase performance. Rolling the forecasts forward to get n-step-ahead forecasts is a common approach. I’d be interested in how they mitigated the problem of the errors accumulating/compounding.

amluto
6 replies
23h28m

I've never studied weather forecasting, but I can't say I'm surprised. All of these models, AFAICT, are based on the "state" of the weather, but "state" deserves massive scare quotes: it's a bunch of 2D fields (wind speed, pressure, etc) -- note the *2D*. Actual weather dynamics happen in three dimensions, and three dimensional land features, buildings, etc as well as gnarly 2D surface phenomena (ocean surface temperature, ground surface temperature, etc) surely have strong effects.

On top of this, surely the actual observations that feed into the model are terrible -- they come from weather stations, sounding rockets, balloons, radar, etc, none of which seem likely to be especially accurate in all locations. Except that, where a weather station exists, the output of that station *is* the observation that people care about -- unless you're in an airplane, you don't personally care about the geopotential, but you do care about how windy it is, what the temperature and humidity are, and how much precipitation there is.

ISTM these dynamics ought to be better captured by learning them from actual observations than from trying to map physics both ways onto the rather limited datasets that are available. And a trained model could also learn about the idiosyncrasies of the observation and the extra bits of forcing (buildings, etc) that simply are not captured by the inputs.

(Heck, my personal in-my-head neural network can learn a mapping from NWS forecasts to NWS observations later in the same day that seems better than what the NWS itself produces. Surely someone could train a very simple model that takes NWS forecasts as inputs and produces its estimates of NWS observations during the forecast period as outputs, thus handling things like "the NWS consistently underestimates the daily high temperature at such-and-such location during a summer heat wave.")
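That last idea is essentially classical Model Output Statistics (MOS): fit a regression from past forecasts to past observations, then apply it to new forecasts. A minimal sketch with synthetic data (the bias numbers are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic history: a hypothetical raw forecast that runs cold and
# under-spread relative to the observed high temperature.
truth = rng.uniform(10, 35, size=500)
forecast = 0.9 * truth - 2.0 + rng.normal(0, 0.5, size=500)

# Fit obs ~ a * forecast + b by least squares (MOS-style correction).
A = np.column_stack([forecast, np.ones_like(forecast)])
(a, b), *_ = np.linalg.lstsq(A, truth, rcond=None)

corrected = a * forecast + b
print("coefficients:", a, b)
print("raw RMSE:", np.sqrt(np.mean((forecast - truth) ** 2)))
print("corrected RMSE:", np.sqrt(np.mean((corrected - truth) ** 2)))
```

National weather services do a (much more elaborate) version of this per station and per variable.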

WhitneyLand
3 replies
22h17m

How does it make sense to say this is something you’ve “never studied”, followed by how they “ought to be” doing it better?

It also seems like some of your facts differ from theirs, may I ask how far you read into the paper?

amluto
1 replies
18h8m

I read a decent amount of the paper, although not the specific details of the model they used. And when I say I "never studied" it, I mean that I never took a class or read a textbook. I do, in fact, know something about physics and fluids, and I have even personally done some fluid simulation work.

There are perfectly good models for weather in an abstract sense: Navier-Stokes plus various chemical models plus heat transfer plus radiation plus however you feel like modeling the effect of the ground and the ocean surface. (Or use Navier-Stokes for the ocean too!)

But this is *wildly* impractical. The Earth is too big. The relevant distance and time scales are pretty short, and the resulting grid would be too large. Not to mention that we have no way of actually measuring the whole atmosphere or even large sections of it in its full 3D glory in anything remotely close to the necessary amount of detail.

Go read the Wikipedia article, and contemplate the "Computation" and "Parameterization" sections. This works, but it's horrible. It's doing something akin to making an effective theory (the model actually solved) out of a larger theory (Navier-Stokes+), but we can't even measure the fields in the effective theory. We might want to model a handful of fields at 0.25 degrees (of lat/long) resolution, but we're getting the data from a detailed vertical slice every time someone launches a weather balloon. Which happens quite frequently, but not continuously and not at 0.25 degree spatial increments.

Hence my point: Google's model is sort of *learning* an effective theory instead of developing one from first principles based on the laws of physics and chemistry.

edit: I once worked in a fluid dynamics lab on something that was a bit analogous. My part of the lab was characterizing actual experiments (burning liquids and mixing of gas jets). Another group was trying to simulate related systems on supercomputers. (This was a while ago. The supercomputers were not very capable by modern standards.)

The simulation side used a 3D grid fine enough (hopefully) to capture the relevant dynamics but not so fine that the simulation would never finish. Meanwhile, we measured everything in 1D or 2D! We took pictures and videos with cameras at various wavelengths. We injected things into the fluids for better visualization. We measured the actual velocity at *one* location (with decent temporal resolution) and hoped our instrumentation for that didn't mess up the experiment too much. We tried to arrange to know the pressure field in the experiment by setting it up right.

With the goal of *understanding* the phenomena, I think this was the right approach. But if we just wanted to predict future frames of video from past frames, I would expect a nice ML model to work better. (Well, I would expect it to work better *now*. The state of the art was not so great at the time.)

counters
0 replies
17h24m

Weather models are routinely run at resolutions as fine as 1-3 km - fine enough that we do not parameterize things like convection and allow the model to resolve these motions on its native grid. We typically do this over limited areas (e.g. domain the size of a continent), but plenty of groups have such simulations globally. It's just not practical (cost for compute and resulting data) to do this regularly, and it offers little by way of direct improvement in forecast quality.

Furthermore, we don't have to necessarily measure the whole atmosphere in 3D; physical constraints arising from Navier-Stokes still apply, and we use them in conjunction with the data we _do_ have to estimate a full 3D atmospheric state complete with uncertainties.
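For a single scalar, that blending of a model background with an observation reduces to a variance-weighted update. A toy sketch of the idea (scalar optimal interpolation, not ECMWF's actual 4DVar machinery):

```python
def analysis_update(background, obs, var_b, var_o):
    """Blend a model background with an observation, weighting each by
    the inverse of its error variance (scalar optimal interpolation)."""
    gain = var_b / (var_b + var_o)        # Kalman-style gain in [0, 1]
    analysis = background + gain * (obs - background)
    var_a = (1.0 - gain) * var_b          # analysis uncertainty shrinks
    return analysis, var_a

# Background says 15.0 C (uncertain); a station reports 17.0 C (trusted more).
x, v = analysis_update(background=15.0, obs=17.0, var_b=4.0, var_o=1.0)
print(x, v)  # the analysis lies between the two, closer to the observation
```

Real assimilation does this jointly over millions of correlated variables, with the physics constraining how information from one observation spreads to neighboring grid points.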

kridsdale3
0 replies
21h49m

No need, they're a software engineer (presumably). That just means they're better than everyone.

Difwif
1 replies
23h11m

I'm not sure why you're emphasizing that weather forecasting is just 2D fields. Even in the article they mention GraphCast predicts multiple data points at each global location across a variety of altitudes. All existing global computational forecast models work the same way. They're all 3d spherical coordinate systems.

amluto
0 replies
17h47m

See page three, table 1 of the paper. The model has 48 2D fields, on a grid, where the grid is a spherical thing wrapped around the surface of the Earth.

That is not what I would call a 3D spherical coordinate system. There's no field f defined as f(theta, phi, r); there are 48 fields that are functions of theta and phi.

Gys
5 replies
1d

I live in an area where the weather regularly differs from the forecast: often less rain and more sun. It would be great if I could connect my local weather station (and/or its history) to some model and get more accurate forecasts.

dist-epoch
2 replies
21h23m

There are models which take as input both global forecasts and local ones, and which then can transpose a global forecast into a local one.

National weather institutions sometimes do this, since they don't have the resources to run a massive supercomputer model.

Gys
1 replies
20h47m

Interesting. So what I am looking for is probably an even more scaled-down version? Or something that runs in the cloud with an API to upload my local measurements.

supdudesupdude
0 replies
19h20m

Hate to break it to you, but one weather station won't improve a forecast. What are they supposed to do? Ignore the output of our state-of-the-art forecast models and add an if statement for your specific weather station?

tash9
0 replies
23h50m

One piece of context to note here is that models like ECMWF are used by forecasters as a tool to make predictions - they aren't taken as gospel, just another input.

The global models tend to consistently miss in places that have local weather "quirks" - which is why local forecasters tend to do better than, say, AccuWeather, which just posts what the models say.

Local forecasters might have learned over time that, in early Autumn, the models tend to overpredict rain, and so when they give their forecasts, they'll tweak the predictions based on the model tendencies.

speps
0 replies
23h59m

Because weather data is interpolated between multiple stations, you wouldn't even need the local station position, your own position would be more accurate as it'd take a lot more parameters into account.
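For illustration, station interpolation is often some flavor of distance-weighted blending. A toy sketch of inverse-distance weighting (IDW), assuming flat geometry over short distances; real analysis systems are far more sophisticated:

```python
import numpy as np

def idw(station_xy, station_vals, query_xy, power=2.0):
    """Inverse-distance-weighted estimate at query_xy from station values."""
    d = np.linalg.norm(station_xy - query_xy, axis=1)
    if np.any(d < 1e-9):                 # query sits exactly on a station
        return station_vals[np.argmin(d)]
    w = 1.0 / d ** power                 # nearer stations count for more
    return float(np.sum(w * station_vals) / np.sum(w))

stations = np.array([[0.0, 0.0], [10.0, 0.0]])
temps = np.array([10.0, 20.0])
print(idw(stations, temps, np.array([5.0, 0.0])))  # midpoint of two stations
```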

stabbles
4 replies
20h45m

If you live in a country where local, short-term rain/shower forecasting is essential (like [1] [2]), it's funny to see how incredibly bad the radar-based forecast is.

There are really convenient apps that show an animated map with radar data of rain, historical data + prediction (typically).

The prediction is always completely bonkers.

You can eyeball it better.

No wonder "AI" can improve that. Even linear extrapolation is better.

Yes, local rain prediction is a different thing from global forecasting.

[1] https://www.buienradar.nl [2] https://www.meteoschweiz.admin.ch/service-und-publikationen/...
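For reference, the "linear extrapolation" baseline for radar nowcasting is just: estimate the motion between the last two frames, then advect the latest frame forward. A toy sketch (real nowcasting systems use optical flow and model growth/decay; `np.roll` wraps around the edges, which a real system would not):

```python
import numpy as np

def estimate_shift(prev, curr, max_shift=3):
    """Brute force: find the integer (dy, dx) shift best mapping prev onto curr."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = np.sum((np.roll(prev, (dy, dx), axis=(0, 1)) - curr) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def extrapolate(prev, curr, steps=1):
    """Advect the latest radar frame along the estimated motion vector."""
    dy, dx = estimate_shift(prev, curr)
    return np.roll(curr, (dy * steps, dx * steps), axis=(0, 1))

# A small "rain cell" moving one pixel to the right per frame.
frame0 = np.zeros((8, 8)); frame0[4, 2] = 1.0
frame1 = np.zeros((8, 8)); frame1[4, 3] = 1.0
forecast = extrapolate(frame0, frame1, steps=2)
print(np.argwhere(forecast == 1.0))  # the cell carried two pixels onward
```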

bberenberg
1 replies
19h33m

Interesting that you say this. I spent in month in AMS 7-8 years ago and buienradar was accurate down to the minute when I used it. Has something changed?

bobviolier
0 replies
10h9m

I don't know how or why, but yes, it has become less accurate over at least the last year or so.

supdudesupdude
0 replies
19h22m

Funny you should mention that. None of the AI forecasts can actually predict precip. None of them mention this, and I assume everyone thinks this means the rain forecasts are better. Nope: just temperature, humidity, and wind. Important, but come on, it's a bunch of shite.

je42
0 replies
10h18m

However, tools like buienrader seem to have trouble in the recent months/years to accurately predict local weather.

miserableuse
3 replies
21h13m

Does anybody know if it's possible to initialize the model using the GFS initial conditions used for the GFS HRES model? If so, where can I find this file and how can I use it? Any help would be greatly appreciated!

counters
2 replies
20h58m

You can try, but other models in this class have struggled when initialized using model states pulled from other analysis systems.

ECMWF publishes a tool that can help bootstrap simple inference runs with different AI models [1] (they have plugins for several). You could write a tool that re-maps a GDAS analysis to "look like" ERA-5 or IFS analysis, and then try feeding it into GraphCast. But YMMV if the integration is stable or not - models like PanguWx do not work off-the-shelf with this approach.

[1]:https://github.com/ecmwf-lab/ai-models

miserableuse
1 replies
20h22m

Thank you for your response. Are these ML models initialized by gridded initial conditions measurements (such as the GDAS pointed out) or by NWP model forecast results (such as hour-zero forecast from the GFS)? Or are those one and the same?

counters
0 replies
20h13m

They're more-or-less the same thing.

max_
3 replies
21h17m

What's the difference between a "Graph Neural Network" and a deep neural network?

dil8
2 replies
20h43m

Graph neural networks are deep learning models that operate on graph-structured data.
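As a minimal sketch of what that means: a single message-passing layer updates each node from its neighbors' features. Plain NumPy below; real GNN libraries (e.g. PyTorch Geometric) add learned edge functions, attention, and much more:

```python
import numpy as np

def message_passing_layer(h, edges, W_self, W_nbr):
    """One GNN layer: each node averages its neighbors' features, then
    mixes them with its own features through weight matrices, with ReLU."""
    n = h.shape[0]
    agg = np.zeros_like(h)
    deg = np.zeros(n)
    for src, dst in edges:               # messages flow src -> dst
        agg[dst] += h[src]
        deg[dst] += 1
    agg /= np.maximum(deg, 1)[:, None]   # mean aggregation
    return np.maximum(0.0, h @ W_self + agg @ W_nbr)

# Tiny graph: 3 nodes in a line, 2-dim features, identity "weights".
h = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]
out = message_passing_layer(h, edges, np.eye(2), np.eye(2))
print(out)
```

GraphCast stacks many such layers over a mesh covering the globe, so information propagates across the sphere.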

RandomWorker
1 replies
18h44m

Do you have any resources where I could learn more about these networks?

EricLeer
0 replies
6h30m

See for instance the pytorch geometric [1] package, which is the main implementation in pytorch. They also link to some papers there that might explain you more.

[1]https://pytorch-geometric.readthedocs.io/en/latest/

jauntywundrkind
3 replies
23h13m

From what I can tell from reading & based off https://colab.research.google.com/github/deepmind/graphcast/..., one needs access to the ECMWF ERA5 or HRES datasets, or something similar, to be able to run and use this model.

Unknown what licensing options ECMWF offers for ERA5, but to use this model in any live fashion, I think one is probably going to need a small fortune. Maybe some other dataset can be adapted (likely at great pain)...

sunshinesnacks
1 replies
21h56m

ERA5 is free. The API is a bit slow.

I think that only some variables from the HRES are free, but not 100% sure.

hokkos
0 replies
17h39m

The API is unusably slow, the only way is to use the AWS, GCP or Azure mirrors, but they miss a lot of variables and are updated sparingly or with a delay.

_visgean
0 replies
15h54m

You can get some of the historical data also from here: https://cloud.google.com/storage/docs/public-datasets/era5 (if the official API is too slow).

To use the data in live fashion I think you would need to get license from ECMWF...

haolez
3 replies
21h14m

Are there any experts around that can chime in on the possible impacts of this technology if widely adopted?

supdudesupdude
1 replies
19h18m

It doesn't predict rainfall, so I doubt most of us will actually care about it until then. Still, it depends on input data (the current state of the weather, etc.). How are we supposed to accurately model the weather at every point in the world? Especially when tech bro Joe living in San Fran expects things to be accurate to a meter within his doorstep.

counters
0 replies
17h28m

GraphCast does predict rainfall - see https://charts.ecmwf.int/products/graphcast_medium-rain-acc?... for example.

_visgean
0 replies
15h59m

It will get adopted; eventually we will have more accurate weather forecasts. That's good for anything that depends on weather - e.g. energy consumption and production, transportation costs...

crazygringo
3 replies
17h18m

Making progress on weather forecasting is amazing, and it's been interesting to see the big tech companies get into this space.

Apple moved from using The Weather Channel to their own forecasting a year ago [1].

Using AI to produce better weather forecasts is *exactly* the kind of thing that is right up Google's alley -- I'm very happy to see this, and can't wait for this to get built into our weather apps.

[1]https://en.wikipedia.org/wiki/Weather_(Apple)

_visgean
1 replies
16h11m

Apple moved from using The Weather Channel to their own forecasting a year ago [1].

AFAIK they don't have their own forecasting models; they use the same data sources as everyone else: https://support.apple.com/en-us/HT211777

crazygringo
0 replies
13h45m

Your linked article says they use their own, if you're on a version later than iOS 15.2.

blacksmith_tb
0 replies
16h58m

Well, Apple acquired Dark Sky and then shut it down for Android users[1], and then eventually for iOS users as well (but rolled it into the built in weather app, I think).

1:https://www.theverge.com/2020/3/31/21201666/apple-acquires-w...

syntaxing
2 replies
22h22m

Maybe I missed it, but does anyone know what it will take to run this model? Seems like something fun to try out, but I'm not sure if 24GB of VRAM is sufficient.

kridsdale3
1 replies
21h52m

It says in the article that it runs on Google's tensor units. So, go down to your nearest Google data center, dodge security, and grab one. Then escape the cops.

azeirah
0 replies
21h36m

You could also just buy a very large amount of their coral consumer TPUs :D

layoric
2 replies
18h29m

I can't see any citation to accuracy comparisons, or maybe I just missed them? Given the amount of data, and complexity of the domain, it would be good to see a much more detailed breakdown of their performance vs other models.

My experience in this space is that I was the first employee at Solcast, building a live 'nowcast' system for 4+ years (left ~2021), targeting solar radiation and cloud opacity initially but expanding into all aspects of weather, focusing on the use of the newer generation of satellites while also heavily using NWP models like ECMWF. Last I knew, nowcasts were made in minutes on a decent-size cluster of systems, and they have been shown in various studies and comparisons to produce extremely accurate data (this article claims 'the best' without links, which is weird). It would be interesting to know how many TPU v4s were used to produce these forecasts, and how quickly. Solcast used ML as a part of their systems, but when it comes down to it, there is a lot more operationally to producing accurate and reliable forecasts; e.g., it would be arrogant, to say the least, to switch from something like ECMWF to this black box anytime soon.

Something I said just before I left Solcast was that their biggest competition would come from Amazon/Google/Microsoft and not other incumbent weather companies. They have some really smart modelers, but it's hard to compete with big tech resources. I believe Amazon has been acquiring power-usage IoT-related companies over the past few years; I can see AI heavily moving into that space as well... for better or worse.

shmageggy
0 replies
17h44m

I think the paper has what you are looking for. Several figures comparing performance to HRES, and "GraphCast... took roughly four weeks on 32 Cloud TPU v4 devices using batch parallelism. See supplementary materials section 4 for further training details."

alxmrs
0 replies
11h43m

I’m so happy you asked about this! Check out https://sites.research.google/weatherbench/

knicholes
2 replies
19h9m

What are the similarities between weather forecasting and financial market forecasting?

sonya-ai
0 replies
19h4m

Well, it's a start, but weather is far more predictable, imo

KRAKRISMOTT
0 replies
19h6m

Both are complex systems traditionally modeled with differential equations and statistics.

freedomben
2 replies
23h56m

Weather prediction seems to me like a terrific use of machine learning, aka statistics. The challenge, I suppose, is in the data. To get perfect predictions you'd need a mapping of what conditions were like 6 hours, 12 hours, etc. before, and what the various outcomes were, including which butterflies flapped their wings and where (this last one is a joke about how hard this data would be). Hard but not impossible. Maybe impossible. I know very little about weather data, though. Is there already such a format?

tash9
1 replies
23h44m

It's been a while since I was a grad student but I think the raw station/radiosonde data is interpolated into a grid format before it's put into the standard models.

kridsdale3
0 replies
21h50m

This was also in the article. It splits the sphere's surface into ~1M grid cells (not grids in the Cartesian sense of a plane; these are radial units). Then there are 37 altitude layers.

So there are radial-coordinate voxels that represent a low-resolution version of the physical state of the entire atmosphere.
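The numbers are easy to check: a 0.25-degree lat/lon grid (as used by ERA5/GraphCast) has 721 x 1440 surface points, a hair over a million, times 37 pressure levels:

```python
# 0.25-degree global grid: latitudes -90..90 inclusive, longitudes 0..359.75
n_lat = int(180 / 0.25) + 1      # 721
n_lon = int(360 / 0.25)          # 1440
n_levels = 37

surface_points = n_lat * n_lon
print(surface_points)                 # 1,038,240 surface cells ("~1M grids")
print(surface_points * n_levels)      # ~38.4M voxels per 3D variable
```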

carabiner
2 replies
21h41m

GraphCast makes forecasts at the high resolution of 0.25 degrees longitude/latitude (28km x 28km at the equator).

Any way to run this at even higher resolution, like 1 km? Could this resolve terrain forced effects like lenticular clouds on mountain tops?

dist-epoch
1 replies
21h18m

One big problem is input weather data. Its resolution is poor.

carabiner
0 replies
21h11m

Yeah, not to mention trying to validate results. Unless we grid install weather stations every 200 m on a mountain top...

Vagantem
2 replies
19h13m

Related to this, I built a service that shows which day it has rained the least on in the last 10 years - for any location and month! Perfect for finding your perfect wedding date. Feel free to check it out :)

https://dropory.com

helloplanets
1 replies
10h7m

Was interested to check this out for Helsinki, but site loads blank on Safari :(

Vagantem
0 replies
54m

Oh, yea spotted now - I’ll have a look as soon as I’m at my computer, will fix. Until then, I think you’ll have to use it on a desktop - thanks for spotting!

whoislewys_1
1 replies
14h13m

Predicting weather and stock prices don't seem too far apart.

Is it inevitable that all market alpha gets mined by AI?

HereBePandas
0 replies
4h57m

I'd be shocked - given the incentives - if it hasn't already happened to a great extent. Many of the types of people Google DeepMind hires are also the types of people hedge funds hire.

user_7832
1 replies
23h14m

(If someone with knowledge or experience can chime in, please feel free.)

To the best of my knowledge, poor weather (especially wind shear/microbursts) are one of the most dangerous things possible in aviation. Is there any chance, or plans, to implement this in the current weather radars in planes?

tash9
0 replies
22h44m

If you're talking about small-scale phenomena (less than 1 km), then this wouldn't help, other than to signal when conditions make these phenomena more likely to happen.

joegibbs
1 replies
15h45m

When will we have enough data that we will be able to apply this to everything? Imagine a model that can predict all kinds of trends - what new consumer good will be the most likely to succeed, where the next war is most likely to break out, who will win the next election, which stocks are going to break out. One gigantic black box with a massive state, with input from everything - planning approvals, social media posts, solar activity, air travel numbers, seismic readings, TV feeds.

drakenot
0 replies
11h2m

Sounds a bit like the premise of Asimov's "Foundation" series.

cryptoz
1 replies
23h23m

Again haha! Still no mention of using barometers in phones. Maybe some day.

EricLeer
0 replies
6h27m

The weather company claims to do this (they are also the main provider of weather data for apple).

comment_ran
1 replies
21h48m

So for a daily user, to make practical use of it: if I have a local measurement of X, can I predict, say, the wind direction tomorrow, the day after, or even 10 days out?

If it is possible, then I will try using a sensor to measure wind velocity where I live, run the model, and see how the results look. I don't know whether it would accurately predict the future, or land within a 10% error bar.

dist-epoch
0 replies
21h28m

No, this model uses as input the current state of the weather across the whole planet.

EricLeer
1 replies
6h34m

I am in the power-forecasting domain, where weather forecasts are one of the most important inputs. What I find surprising is that, with all the papers and publications from Google in the past years, there seems to be no way to get access to these forecasts! We've now evaluated numerous AI weather-forecasting startups that are popping up everywhere, and so far all of their claims fall flat on their face when you actually start comparing their quality in a production setting next to the HRES model from ECMWF.

scellus
0 replies
1h1m

GraphCast, Pangu-Weather from Huawei, FourCastNet and EC's own AIFS are available on the ECMWF chart website https://charts.ecmwf.int - click "Machine learning models" on the left tab. (Clicking anything makes the URL very long.)

Some of these forecasts are also downloadable as data, but I don't know whether GraphCast is. Alternatively, if forecasts have a big economic value to you, loading latest ERA5 and the model code, and running it yourself should be relatively trivial? (I'm no expert on this, but I think that is ECMWF's aim, to distribute some of the models and initial states as easily runnable.)

supdudesupdude
0 replies
19h16m

I'll be impressed when it can predict rainfall better than GFS / HRRR / EURO etc

simonebrunozzi
0 replies
21h3m

Amazing. Is there an easy way to run this on a local laptop?

sagarpatil
0 replies
22h55m

How does one go about hosting this and using this as an API?

rottc0dd
0 replies
8h42m

How long does this forecasting hold, given the butterfly effect et al.?

max_
0 replies
21h29m

I have far more respect for the AI team at DeepMind, even though they may be less popular than, say, OpenAI or "Grok".

Why? Other AI studios seem to work on gimmicks while DeepMind seems to work on genuinely useful AI applications [0].

Thanks for the good work!

[0] Not to say that ChatGPT & Midjourney are not useful; I just find DeepMind's quality of research more interesting.

max_
0 replies
21h27m

Has anyone here heard of "Numerical Forecasting" models for weather? I heard they "work so well".

Does GraphCast come close to them?

hammad93
0 replies
15h20m

I think it's irresponsible to claim this is a first, because it will hinder scientific collaboration. I appreciate this contribution, but the journalism was sloppy.

dnlkwk
0 replies
20h49m

Curious how this factors in long-range shifts or patterns, e.g. El Niño. "Most accurate" is a bold claim.

csours
0 replies
18h38m

Makes me wonder how much it would take to do this for a city at something like 100 meter resolution.