Figure 3 on p. 40 of the paper seems to show that their LLM-based model does not statistically significantly outperform a three-layer neural network using 59 variables from 1989.
This figure compares the prediction performance of GPT and quantitative models based on machine learning. Stepwise Logistic follows Ou and Penman (1989)’s structure with their 59 financial predictors. ANN is a three-layer artificial neural network model using the same set of variables as in Ou and Penman (1989). GPT (with CoT) provides the model with financial statement information and detailed chain-of-thought prompts. We report average accuracy (the percentage of correct predictions out of total predictions) for each method (left) and F1 score (right). We obtain bootstrapped standard errors by randomly sampling 1,000 observations 1,000 times and include 95% confidence intervals.
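For anyone curious what that bootstrap amounts to, here is a minimal sketch (the resample size of 1,000, the 1,000 repetitions, the accuracy metric, and the 95% interval come from the caption; everything else - names, seed, the use of numpy - is my own assumption):

    import numpy as np

    def bootstrap_accuracy_ci(y_true, y_pred, n_draws=1000, sample_size=1000, seed=0):
        """Resample (with replacement) and report mean accuracy, SE, and 95% CI."""
        rng = np.random.default_rng(seed)
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        accs = []
        for _ in range(n_draws):
            idx = rng.integers(0, len(y_true), size=sample_size)  # draw 1,000 observations
            accs.append(np.mean(y_true[idx] == y_pred[idx]))      # accuracy on the resample
        accs = np.array(accs)
        return accs.mean(), accs.std(ddof=1), np.percentile(accs, [2.5, 97.5])

Overlapping 95% intervals between two methods (which is what the figure appears to show for GPT vs the ANN) is a rough, conservative indication of "not significantly different", not a formal test.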
Not to mention that, as somebody who works in quant trading doing ML all day on this kind of data, I can say that ANN benchmark is nowhere near state of the art.
People didn't stop working on this in 1989 - they realised they could make lots of money doing it and kept doing it privately.
Do you use llama 3 for your work?
No hedge fund registered before the last 2 weeks will use Llama 3 for its "prod work" beyond "experiments".
Quant trading is about "going fast" or "being super right", so either you'd need to be sitting on some huge llama.cpp/transformer improvement (possible but unlikely), or it's more likely just some boring math applied faster than others.
Even if they are using an "LLM", they won't tell you or even hint at it - "efficient market" and all that.
Remember, all quants need to be "the smartest in the world" or their whole industry falls apart. Wait till you find out it's all "high school math" based on algos largely derived 30/40 years ago (okay, not as true for "quants", but most "trading" isn't as complex as they'd like you/us to believe).
I know nothing about this world, but with things like "doctor rediscovers integration" I can't help but wonder if it's not deception but ignorance - that they think that really is where math complexity tops out.
They hire people who know that maths doesn't "top out" here, so they can point to them and say "look at the mathematicians/physicists/engineers/PhDs we employ - your $20Bn is safe here". Hedge funds aren't run by idiots, just a different kind of "smart" than an engineer's.
The engineers are incredibly smart people, and so the bots are "incredibly smart", but "finance" is criticised by "true academics" because finance is where brains go to die.
To use a popular-science example: "the three-body problem" is much harder than "arb trade $10M profitably for a nice life in NYC"; you just get paid less for solving the former.
It is just a different (applied) discipline.
It's like math vs engineering - you can come up with some beautiful PDE theory describing how this column in a building will bend under dynamic load and use it to figure out the exact proportions.
But engineering is about figuring out "just make its ratio of width to height greater than x".
Because the goal is different - it's not about coming up with the most pleasing description or finding the most accurate model of something. It's about making stuff in the real world in a practical, reliable way.
The three body problem is also harder than running experiments in the LHC or analysing Hubble data or treating sick kids or building roads or running a business.
Anybody who says that finance is where brains go to die might do well to look in the mirror at their own brain. There are difficult challenges for smart people in basically every industry - anybody suggesting that people not working in academia are in some way stupider should probably reconsider the quality of their own brain.
There are many, many reasons to dislike finance. That it is somehow pedestrian or for the less clever people is not true. Nobody who espouses the points you've made has ever put their money where their mouth is. Why not start a firm, make a billion dollars a year because you're so smart, and fund fusion research with it? Because it's obviously way more difficult than they make out.
Not that it's particularly relevant to this discussion, but the three-body problem is easy. You can solve it numerically on a laptop with insane precision (much more precisely than would be useful for anything) or also write down an analytic solution (which is ugly and useless because it converges extremely slowly, but still; see wikipedia.org/wiki/Three-body_problem).
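In case anyone wants to try the "numerically on a laptop" part, here is a minimal planar three-body sketch with scipy (unit masses, G = 1, and roughly the well-known figure-eight initial conditions; all the numbers are illustrative, nothing here comes from the thread):

    import numpy as np
    from scipy.integrate import solve_ivp

    G = 1.0
    m = np.array([1.0, 1.0, 1.0])  # three unit masses

    def rhs(t, y):
        # y = [x1, y1, x2, y2, x3, y3, vx1, vy1, ..., vy3]
        pos = y[:6].reshape(3, 2)
        vel = y[6:].reshape(3, 2)
        acc = np.zeros_like(pos)
        for i in range(3):
            for j in range(3):
                if i != j:
                    d = pos[j] - pos[i]
                    acc[i] += G * m[j] * d / np.linalg.norm(d) ** 3  # Newtonian gravity
        return np.concatenate([vel.ravel(), acc.ravel()])

    # Approximate figure-eight initial conditions (positions first, then velocities)
    y0 = np.array([-0.97000436,  0.24308753,
                    0.97000436, -0.24308753,
                    0.0,         0.0,
                    0.46620368,  0.43236573,
                    0.46620368,  0.43236573,
                   -0.93240737, -0.86473146])

    sol = solve_ivp(rhs, (0, 20), y0, rtol=1e-12, atol=1e-12)
    print(sol.y[:6, -1])  # final positions of the three bodies

Tighten the tolerances (or switch to a symplectic integrator) and you get far more precision than any application could use, which is the point.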
From your link:
This seems like the opposite of your claim.
The crucial parts of that are "closed-form" and "standard". The analytic solution is "non-standard" because it involves the kind of power series that nobody knows or cares about (because they are only about 100 years old and have no real useful applications in engineering).
A similar claim is that roots of polynomials of degree 5 (and over) have no "general closed form solution" (with, as usual, the implicit qualification: "in terms of functions I'm currently comfortable with because I've seen them a lot"). That doesn't mean it's a difficult problem.
The two problems have in common that they are significantly harder than their smaller versions (two bodies, or degree 4). Historically, people spent a lot of time trying to find solutions for the larger problems in terms of the same functions that can be used to solve the smaller problems (conic sections, radicals). That turned out to not be possible. This is the historical origin of the meme "three body problem is unsolvable".
I'll probably go look this up, but do you mean functions of a higher type than normal powers, like e.g. tetration, or something more complicated? (Am I even on the right track?)
I mean functions defined by power series (just like sin(x) is defined in analysis courses). For the three-body problem, see http://oro.open.ac.uk/22440/2/Sundman_final.pdf (warning, PDF!). This is what Wikipedia cites when talking about the solution to the three-body problem. The document gives a lot of historical context.
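Concretely, "defined by a power series" means the same kind of object as

    \sin(x) \;=\; \sum_{n \ge 0} \frac{(-1)^n\, x^{2n+1}}{(2n+1)!}

and, if I'm reading the Sundman paper right, his solution expresses the coordinates as convergent series in powers of (essentially) t^{1/3} - convergent for all time, but so slowly that it is useless for actual computation.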
For polynomial roots, see wikipedia.org/wiki/Elliptic_function.
You misunderstand the quote. It’s where brains go to die from a societal perspective. It might be stimulating and difficult for the individual but it’s useless to science.
Many advancements in computer science have come from the finance world.
e.g. LMAX Disruptor was a pretty impressive concurrency library a decade ago:
https://lmax-exchange.github.io/disruptor/
Claiming that being smart isn't required for trading is not the same as claiming that people doing trading aren't smart.
(Note that I personally have no opinion on this topic, as I'm not sufficiently informed to have one.)
I was specifically addressing the claim that "being smart isn't necessary for trading".
The OP is implying, across numerous posts, that it's all basically a big con and it's all very simple.
It is like claiming you don't need to be a rocket scientist to go to the moon because they just use metal and screws.
The individual parts might be simple in isolation. But it is the complexity of conducting large scale, large scope research in an environment that gives you limited feedback and will adapt to your own behaviour changes that is where the smarts are needed.
The OP seems to not understand the inherent difficulty of doing any research.
Almost anybody could be taught to make a simple circuit and battery from some basic raw materials. The fact that it is simple and easy now that we know the answer does not mean it was simple or easy to discover. Some of the greatest minds dedicated their entire lives to discovering things that most 10-year-olds now understand. That doesn't imply you only need the intellect of a 10-year-old to make fundamental breakthroughs in science.
Working in quant trading is almost pure research - and so it requires a certain level of intellect - probably at least the intellect required to pursue a quantitative PhD successfully (not that they need the PhD but they need the capacity to be able to do one).
My interpretation of "finance is where brains go to die" is more along the lines of finance being less good for society at large compared to pure science. Like if someone invents something new and useful in a lab for their PhD, and then goes and finds a job in finance. The brain died because it was onto something and then abandoned it to become a cog in the machine.
"Doctor rediscovers integration" is about people stepping far outside their field of expertise.
It is neither deception nor ignorance.
It's the same reason some of the best physics students get PhD studentships where they are basically doing linear regression on some data.
Being very good at most disciplines is about having the fundamentals absolutely nailed.
In chess, for example, you probably need to get to a reasonably high level before you can be sure of seeing players not make obvious blunders.
Why do tech firms want developers who can write bubble sort backward in assembly when they'll never do anything that fundamental in their career? Because to get to that level you have to (usually) build solid mastery of the stuff you will use.
Trading is truly a complex endeavour - anybody who says it isn't has never tried to do it from scratch.
I'd say the industry average for somebody moving to a new firm and trying to replicate what they did at their old firm is about 5%.
I'm not sure what you'd call a problem where somebody has seen an existing solution, worked for years on it and in the general domain, and still would only have a 5% chance of reproducing that solution.
Because 95% of experienced candidates in trading were fired or are trying to scam their next employer.
“Oh, yeah, my <insert HFT pipeline or statarb model> can do sharpe <random int 1 to 10> for <random int 10 to 100> million pnl per year. Trust me bro”. Fucking annoying
Obviously not true. The deal at most of these setups is that team founders/PMs are paid mostly by profit share. So the only scam is scamming yourself into a low-salary position for a couple of years till they fire you.
Orders of magnitude more people leave their jobs of their own choosing than are fired.
These PMs are not the ones job hopping every year.
And 95% of interview candidates are not PMs.
200k-300k USD salary is not low.
And 1 year garden leave / non compete? That’s literally 0.5M over 2 years for doing jack shit.
This is very appealing for tech SWEs or MBA product managers who are all talk and no walk.
But even with a profit share / PnL cut, many firms pay you a salary even before you turn a profit. It eventually gets deducted once you do turn a profit.
Hedge fund, maybe. Prop trading, no.
write bubble sort backward in assembly
you mean backporting a high-level implementation to assembly? Or is writing code "backward" some crazy challenge interviewees have to do now?
Spell the assembly backwards out loud with no prior notes while juggling knives (shows boldness in the way you approach problems!) and standing on a gymnastics ball (shows flexibility and well-roundedness)...
To extend the chess analogy, having the fundamentals absolutely nailed is critical at even a mid-level, because the payoff/effort ratio in avoiding blunders/mistakes is much higher than innovating or being creative.
The process of getting to a higher level involves rote learning of common tactics so you can instantly recognize opportunities, and then eventually learning deep into "opening theory" which is memorizing 10 starting moves + their replies because people much better than you have written lengthy books on the long-term ramifications of making certain moves. You're learning a vast repertoire of "existing solutions" so you can reproduce them on-demand, because those solutions are battle-tested to not have weaknesses.
Chess is a game where the amount you have to lose by being wrong is much higher than what you gain by being right. Fields where this is the case want to ensure to a greater extent that people focus on the fundamentals before they start coming up with new ideas.
How is it not ignorance of math?
Please cite your references, lest you run afoul of the lulgodz:
https://diabetesjournals.org/care/article/17/2/152/17985/A-M...
It's impressive how incorrect so much of this information is. High-frequency trading is about going fast. There is a huge mid- and low-frequency quant industry. Also, most quant strategies are absolutely not about being "super right" - that would be the province of concentrated discretionary strategies. Quant is almost always about being slightly more right than wrong, but at large scale.
What algos are you referring to derived 30 or 40 years ago? Do you understand the decay for a typical strategy? None of this makes any sense.
Quantitative trading is simply the act of trading on data, fast or slow, but I'll grant that for the more sophisticated audience there is a nuance between "HFT" and "quant" trading.
To be "super right" you just have to make money over a timeline you set, according to your own models. If I choose a 5-year timeline for a portfolio, I just have to show my portfolio outperforming "your preferred index here" over that timeline - simple (kind of; I'm ignoring metrics other than "make me money" here).
Which algos you use depends on what you're trading, but the way to calculate the price of an option/derivative hasn't changed, to my understanding, for 20/30 years - how fast you can calculate, forecast, and trade on that information has.
My statement won't hold true in a conversation with an "investing legend", but to the audience asking "do you use llama3" it's clearly an appropriate response.
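For concreteness, the textbook formula presumably being alluded to here is Black-Scholes (1973) for a European call with no dividends:

    C \;=\; S\,N(d_1) - K e^{-rT} N(d_2),
    \qquad d_1 = \frac{\ln(S/K) + (r + \sigma^2/2)\,T}{\sigma\sqrt{T}},
    \qquad d_2 = d_1 - \sigma\sqrt{T}

where N is the standard normal CDF. Most of the argument further down the thread is about where sigma comes from, not about this formula.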
That's not true. It is true that the Black-Scholes model was found in the 70s, but since then you have:
- stochastic vol models
- jump diffusion
- local vol or Dupire models
- Lévy processes
- binomial pricing models
all of which came well after the initial model was derived.
Also, a lot of work has gone into calculating vols and prices far faster.
The industry has definitely changed a lot in the past 20 years.
Very few of the fancy models are actually used. Dupire's non-parametric model has been the industrial workhorse for a long time. Heston-like SVs and jump diffusions promised a lot and did not work in practice (calibration, stability issues). Some form of local-stochastic vol model gets used for certain products. In general, it is safe to say that Black-Scholes and its deterministic extension, local vol, have held up well.
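For reference, the Dupire local vol mentioned here reads, in the zero-rates/zero-dividends case (rates and dividends add terms to the numerator):

    \sigma_{\mathrm{loc}}^2(K, T) \;=\; \frac{\partial C / \partial T}{\tfrac{1}{2}\, K^2\, \partial^2 C / \partial K^2}

where C(K, T) is the market call-price surface - non-parametric in the sense that it is read off the observed surface rather than assumed.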
Not only that, but Dupire’s local vol, stochastic vol (Heston in rates, or on the equity side models that combine local vol with a stoch vol component to calibrate to implied vols perfectly) and jump diffusion were basically in production 15 years ago.
Since the GFC it’s not about crazy new products (on derivatives desks), but it’s about getting discounting/funding rates precisely right (depending on counterparty, collateral and netting agreements, onshore/offshore, etc), and about compliance and reporting.
I don't really understand your viewpoint - I assume you don't actually work in trading?
Aside from the "theoretical" developments the other comment mentioned, your implication that there is some fixed truth is not reflected in my career.
Anybody who has even a passing familiarity with doing quant research would understand that Black-Scholes and its descendants are very basic results from basic assumptions. It says that if the price follows a certain type of random walk and - crucially - is a martingale and Markov, then there is a closed-form answer.
First and foremost, Black-Scholes is inconsistent with the market it tries to describe (vol smiles, anyone?), so anybody claiming it's how you should price options has never been anywhere near trading options in a way that doesn't shit money away.
In reality the assumptions don't hold - log returns aren't Gaussian, and the process is almost certainly neither Markov nor a martingale.
The guys doing the very best option pricing are building empirical (so not theoretical) models that adjust for all sorts of stuff like temporary correlations that appear between assets, dynamics of how different instruments move together, autocorrelation in market behaviour, spikes and patterns of irregular events, and hundreds of other things.
I don't know of any firm anywhere that is trading profitably at scale and is using 20-year-old or even purely theoretical models.
The entire industry moved away from the theory-driven approach about 20 years ago for the simple reason that it is inferior in every way to the data-driven approach that now dominates.
There's no way this person works as a quant. Almost every statement they've made is wrong...
Not true. Most of the magic happens in estimating the volatility surface, BSM's magic variable. But I've also seen interesting work in expanding the rates components. All this before we get into the drift functions.
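As a toy illustration of "estimating BSM's magic variable": backing an implied vol out of a single market price is just a one-dimensional root find against the Black-Scholes formula; the real work is turning thousands of noisy quotes into a clean, arbitrage-free surface. A minimal sketch with made-up inputs (function names and numbers are mine):

    from math import log, sqrt, exp, erf
    from scipy.optimize import brentq

    def norm_cdf(x):
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def bs_call(S, K, T, r, sigma):
        """Black-Scholes price of a European call, no dividends."""
        d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

    def implied_vol(price, S, K, T, r):
        """Find the vol that reproduces an observed call price."""
        return brentq(lambda sig: bs_call(S, K, T, r, sig) - price, 1e-6, 5.0)

    # Made-up example: spot 100, strike 110, 6 months, 2% rate, observed price 4.0
    print(implied_vol(price=4.0, S=100.0, K=110.0, T=0.5, r=0.02))

Do that across strikes and maturities and you have a raw implied vol surface; the modelling effort goes into smoothing and extrapolating it without introducing arbitrage.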
While the industry has changed substantially since the GFC, all foundational derivatives models were basically in place back then.
How you can calculate fast, forecast, and trade on that information has
There. Fixed it for you. ;)
Algo trading is certainly about speed too, though, but it's not HFT, which is literally only about speed and scalping spreads. It's about the speed of recognizing trends and reacting to them before everyone else realizes the same trend and thus alters the trend.
It's a lot like quantum mechanics, or whatever it is that makes the observation of a photon change it. Except with the caveat that the first to recognize the trend can direct its change (for profit).
Leveraging "hidden" risk/reward asymmetries is another avenue completely that applies to both quant/HFT, adding a dimension that turns this into a pretty complex spectrum with plenty of opportunities.
The old joke of two economists ignoring a possible $100 bill on the sidewalk is an ironic adage. There are hundreds of bills on the sidewalk, the real problem is prioritizing which bills to pick up before the 50mph steamroller blindsides those courageous enough to dare play.
Well, I work in prop trading and have only ever worked for prop firms - our firm trades its own capital and distributes it to the owners and us under profit-share agreements - so we have no incentive to sell ourselves as any smarter than the reality.
Saying it's all high school math is a bit of a loaded phrase. "High school math" incorporates basically all practical computer science and machine learning and statistics.
I suspect you could probably build a particle accelerator without using more math than a bit of calculus - that doesn't make it easy or simple to build one.
Very few people I've worked with have ever said they are doing cutting-edge math - it's more like scientific research. The space of ideas is huge, and the ways to ruin yourself innumerable. It's more about people with a scientific mindset who can make progress in a very high-noise and adaptive environment.
It's probably more about avoiding blunders than it is having some genius paradigm shifting idea.
Would you ever go off on your own to trade solo or is that something that just does not work without a ton (like 9 figures) of capital and a pretty large team?
Going solo in trading is a very different beast compared to trading at a prop firm. Yes, capital is a significant factor. The more you have, the more you can diversify and absorb losses which are inevitable in trading. However, it's not just about the capital. The infrastructure, data access, and risk management systems at a prop firm are usually far superior to what you could afford or build on your own as an individual trader.
Moreover, the collaborative environment at a prop firm can't be overstated. Ideas and strategies are continuously debated, tested, and refined. This collective brainpower often leads to more robust strategies than what you might come up with on your own.
That said, there are successful solo traders, but they often specialize in niche markets where they can leverage unique insights or strategies that aren't as capital intensive. It's definitely not for everyone and comes with its own set of challenges and risks.
It's like any other business, there are factors of production that various actors will have varying access to, at varying costs.
A car designer still needs a car factory of some sort, and there's a negotiation there about how the winnings are divided.
In the trading world there are a variety of strategies. Something very infra dependent is not going to be easy to move to a new shop. But there are shops that will do a deal with you depending on what knowledge you are bringing, what infra they have, what your funding needs are, what data you need, and so on.
I too believe this is key towards successful trading. Put in other words, even with an exceptionally successful algorithm, you still need a really good system for managing capital.
In this line of business, your capital is the raw material. You cannot operate without money. A highly leveraged setup can get completely wiped out during massive swings - triggering margin calls and automatic liquidation of positions at the worst possible price (maximizing your loss). Just ask ex-billionaire investor/trader Bill Hwang[1].
1. https://www.bloomberg.com/news/features/2021-04-08/how-bill-...
I'm responding to the comment "do you use llama3", not "breakdown your start".
This statement is largely true of any "edge research": as I watch the loss totals flow by on my 3rd monitor, I can think of 30 different avenues of exploration (none of which are related to finance).
Trading is largely high school math on top of very complex code, infrastructure, and optimizations.
Do you work for rentech?
The math might not be complicated for a lot of market making stuff but the technical aspects are still very complicated.
llama3 is all high school math too.
Are there any learning resources that you know of?
Going fast means scalping?
Mind elaborating?
Speaking for myself and likely others with similar motivations, yes we can "figure it out" and publish something to show our work and expand the field of endeavor with our findings - OR - we can figure something profitable out on our own and use our own funds to trade our strategies with our own accounts.
Anyone who has figured out something relatively profitable isn't telling anyone how they did it.
Corollary: someone who is selling you tools or strategies on how to make tons and tons of money, is probably not making tons and tons of money employing said tools and strategies, but instead making their money by having you buy their advice.
I think I could probably make more money selling a tool or strategy that consistently, reliably makes ~2% more than government bonds than I could make off it myself, with my current capital.
Seems like the money here would be building a shiny, public facing version of the tool behind a robust paywall and build a relationship with a few Broker Dealer firms who can make this product available to the Financial Advisors in their network.
If you were running this yourself with $1M input capital, that'd be $20k/year per 1M of input - so $20K is a nice number to try and beat selling a product that promulgates a strategy.
But you're going to run into the question from people using the product: "Yeah - but HOW DOES IT WORK??!!!" and once you tell them does your ability to get paid disappear? Do they simply re-package your strategy as their own and cease to pay you (and worse start charging for your work)? Is your strategy so complicated that the value of the tool itself doing the heavy lifting makes it sticky?
Getting people to put their money into some black-box kind of strategy would probably be challenging - but I've never tried it; it may be easier than giving away free beer for all I know. Sounds like a fun MVP effort, really. Give it a try - who knows what might happen.
As far as I know, the more people use the strategy, the worse it performs; the market is not static, it adapts. Other people react to the buys/sells of your strategy and try to exploit the new pattern.
This is an interesting observation in combination with the popular pension strategy to continually buy index funds regardless of performance.
The average return from index funds is the benchmark that all those others are trying to beat but all the competitors trying to beat the average have a tendency to push successful strategies towards the average.
Lol try it and get back to us.
Well, see, I don't actually have a method for that. But if I did, I think my capital is low enough that I'd have more success selling it to other people than trying to exploit it myself, since the benefit would be pretty minimal if I did it with just my own savings, but could be pretty dramatic for, say, banks.
Strats tend to have limits. What works for you may fall apart with large amounts of capital. Don't discount compound interest. $10,000 compounding 30% over 20 years is 2 million without any additional capital.
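For the record, the compounding arithmetic: 10,000 * 1.3^20 ≈ 10,000 * 190 ≈ $1.9 million, so the order of magnitude holds.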
Or just sell it to exactly one buyer with a lot of capital to invest.
That hypothetical person or organization already has an advisor in charge of their money at the smaller end, or an entire private RIA on the family-office side of things. This approach is a fool's errand.
You can't do it because there are lots of fraudulent operators in the space. Think about it: someone comes up to you offering a way to give you risk-free returns. All your Ponzi flags go up. It's a market for lemons. If you had this, the only way to make it is to raise money some other way and then redirect it (illegal, but you'll most likely get away with it), or to slowly work your way up the ranks, proving yourself, till you get to a PM and then have him work your strat for you.
The fact that you can't reveal how means you can't prove you're not a Ponzi. If you reveal how, they don't need you.
If you can prove it works you won't have any difficulty raising capital.
Absolutely correct - and moreover, when you do sit someone down (in my case, someone with a "superior education" in finance compared to my CS degree) and explain things to them, they simply don't understand it at all and assume you're crazy because you're not doing what they were taught in biz school.
Why not then publish the strategies once outmoded, or are they in fact published? Can I go see somewhere what strategies big funds used in the 90s to make bank, which presumably no longer offer a competitive advantage? The way I can go see what computer exploits/hacks used to work when they were still secret?
Maybe it's just what I know, but I can't help but think the "strategies" are a lot like security exploits--some cleverness, some technical facility, but mainly the result of staring at the system for a really long time and stumbling on things.
Because then your competition knows which strategies don't work, and also what types of strategies you work on.
Don't leak information.
Why not? Because you won't know which of your strategies has been outmoded by something new, because the group whose strategy is like yours but on steroids isn't publishing theirs either.
And then everything regresses to the Dark Forest game theory.
Wouldn't publishing also influence the performance itself because it would also make an impact on the data? And if you'd calculate that in and the method is spreading, wouldn't that in turn have to be calculated in also, which would lead to a spiral?
I am assuming he/she minds a lot.
Seems simple: Why share your effective strategies in an industry full of competition and those striving to gain a competitive edge?
At a SciPy meeting where someone in finance was presenting an intro on some tools, someone asked if they ever contribute code to those open source projects. Their answer was "Yes, but only after we've stopped making money with them."
I never traded consistently and successfully but I did do a startup with a seasoned quant trader with the ambition of using bigger models to generate novel alpha. We mopped the floor with the academics who publish but that is whiffle ball compared to a real prop outfit that lasts.
Not having made it big myself I obviously don’t know the meta these days, but last I had any inside baseball, the non-stationarity and friction just kill you on trying to get fancy as opposed to just nailing it on the fundamentals.
Extreme execution quality is a game: people make money in both traditional liquidity provision and agency execution by being fast as hell and managing risk well.
Composing individually somewhat mundane signals well via straightforward linear-ish regressions is a game: people get (ever-decaying) alpha out of bright ideas (and rotate new signals in).
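A cartoon of the "compose mundane signals via linear-ish regressions" game (everything here is invented - three hypothetical standardized signals, tiny true edges, mostly noise; real pipelines care far more about decay, turnover, costs and leakage than about the fit itself):

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)

    # Toy panel: 3 mundane signals vs next-period return
    n = 5000
    X = rng.normal(size=(n, 3))                        # hypothetical standardized signals
    true_w = np.array([0.02, 0.01, -0.015])            # tiny edges, as in real life
    y = X @ true_w + rng.normal(scale=1.0, size=n)     # returns are mostly noise

    model = Ridge(alpha=10.0).fit(X[:4000], y[:4000])  # regularized linear combination
    ic = np.corrcoef(model.predict(X[4000:]), y[4000:])[0, 1]
    print("out-of-sample information coefficient:", round(ic, 3))

The "rotate new signals in" part is the hard bit: each column's predictive power decays, so the research loop never stops.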
And I’m sure that LLMs have started playing a role, there’s a legitimate capability increase in spite of the dubious production-worthiness.
But as a blind wager, I bet prop trading is about what it was 5 years ago on better gear: elite execution (no pun intended) on known-good ways to generate alpha.
Was going to point out the same. Glad to have the paper to read but I don't think the findings are significant.
I agree this isn't earth shattering, but I think the benefit here is that it's a general solution instead of one trained on financial statements specifically.
That is not a benefit. If you use a tool like this to try to compete with sophisticated actors (e.g. all major firms in the capital markets space) you will lose every time.
We come up with all sorts of things that are initially a step backwards, but that lead to eventual improvement. The first cars were slower than horses.
That's not to suggest that Renaissance is going to start using Chat GPT tomorrow, but maybe in a few years they'll be using fine tuned versions of LLMs in addition to whatever they're doing today.
Even if it's not going to compete with the state of the art models for something, a single model capable of many things is still useful, and demonstrating domains where they are applicable (if not state of the art) is still beneficial.
Far too much in the way of "maybe in a few years" LLM prediction relies on the unspoken assumption that there will not be any gains in the state of the art in the existing, non-LLM tools.
"In a few years" you'd have the benefit of the current, bespoke tools, plus all the work you've put into improving them in the meantime.
And the LLM would still be behind, unless you believe that at some point in the future, a radically better solution will simply emerge from the model.
That is, the bet is that at some point, magic emerges from the machine that renders all domain-specialist tooling irrelevant, and one or two general AI companies can hoover up all sorts of areas of specialism. And in the meantime, they get all the investment money.
Why is it that we wouldn't trust a generalist over a specialist in any walk of life, but in AI we expect one day to be able to?
Specialists exist because the human generalist can no longer possibly learn and perfect all there is to learn in the world, not because the specialist has magic powers the generalist doesn't.
If there were some super-generalist that could, then the specialist would have no power.
The technocrat thinks that the AI is that generalist and will impose it on you whether you want it or not:
"I didn't violate a red light. I wasn't even driving, the AI was!"
"The AI said you did, that's 50,000 yuan please."
I have a slightly more cynical take: Those LLMs are not actually general models, but niche specialists on correlated text-fragments.
This means human exuberance is riding on the (questionable) idea that a really good text-correlation specialist can effectively impersonate a general AI.
Even worse: Some people assume an exceptional text-specialist model will effectively meta-impersonate a generalist model impersonating a different kind of specialist!
Eloquently put :-)
The specialist is a result of his general intelligence though.
It seems to me that LLMs are the metaphorical horse and specialized algorithms are the metaphorical car in this situation. A horse is an extremely complex biological system that we barely understand and which has evolved many functions over countless iterations, one of which happens to be the ability to run quickly. We can selectively breed horses to try to get them to run faster, but we lack the capability to directly engineer a horse for optimal speed. On the other hand, cars have been engineered from the ground up for the specific purpose of moving quickly. We can study and understand all of the systems in a car perfectly, so it's easy to develop new technology specialized for making cars go faster.
Agreed. Most people can't create a custom-tailored financial-statement model. But many people can write the following sentence: "analyze this financial statement and suggest a market strategy." And if that sentence performs as well as an (albeit old) custom model, and is likely to see compounding improvements in its performance over time with no changes to the instruction sentence...
"buy and hold the S&P 500 until you're ready to retire"
That is bad advice.
VGT Vanguard Technology ETF has outperformed S&P 500 over the past 20 years.
All the people who say “VTSAX and chill” disappeared in the past 3-4 years because their cherished total passive index fund is no longer the best over long horizons. And no, the markets are not efficient.
But it can't come up with a particularly imaginative strategy; it can only come up with a mishmash of existing stuff it has seen, equivocate, or hallucinate a strategy that looks clever but might not be.
So it all needs checking. It's the classic LLM situation. If you're trained enough to spot the errors, the analysis wouldn't take you much time in the first place. And if you're not trained enough to spot the errors...
And let's say it does work. It's like automated exchange betting robots. As soon as everyone has access to a robot that can exploit some hidden pattern in the data for a tiny marginal gain, the price changes and the gain collapses.
So if everyone has the same access to the same banal, general analysis tools, you know what's going to happen: the advantage disappears.
All in all, why would there be any benefits from a generalised model?
No
But I bet it uses way more energy.
The infamous 1/N portfolio comparison is missing. 1/N puts many strategies to shame.
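For anyone unfamiliar with the benchmark: 1/N just means rebalancing to equal weights across N assets every period (the comparison made famous, if I remember the reference right, by DeMiguel, Garlappi and Uppal's "naive diversification" paper). A minimal sketch:

    import numpy as np

    def one_over_n_returns(asset_returns):
        """Per-period return of an equal-weight (1/N) portfolio, rebalanced each period.
        asset_returns: array of shape (T, N) of simple per-period returns."""
        return np.asarray(asset_returns).mean(axis=1)

    # Toy example: 3 assets over 4 periods
    r = np.array([[ 0.01, -0.02,  0.03],
                  [ 0.00,  0.01, -0.01],
                  [ 0.02,  0.02,  0.00],
                  [-0.01,  0.00,  0.01]])
    print(one_over_n_returns(r))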