It would require Ma Bell. As the article correctly deduces (IMO), it was related to their quasi-monopoly. It was a partial and insecure monopoly; they did need to innovate or they would eventually be eaten. Telecoms was their business, in the very broadest sense. New ways of drawing wire more efficiently would benefit their bottom line. New forms of amplifier technology would almost certainly benefit them ten or twenty years down the line. Same with a new kind of treatment to make telephone poles last longer. So basic metallurgy, forestry, and cutting-edge semiconductor physics were all within the remit. In hindsight it's almost like a public/national research laboratory, that happened to be privately held.
Google already did this. Google X, Google Brain, Deepmind...
The invention of the transformer architecture that underpins modern AI, dozens of core papers in the field.
What they haven't revolutionized is materials science. Memristor technology for example, is still grossly underdeveloped.
Unfortunately that's the Google of a decade or two ago. Today's Google is very focused on cost-cutting and I can't imagine them investing in new future-focused projects like those which are not under pressure to turn a profit any time soon.
The paper on transformers came out in 2017. Less than a decade ago.
7 years is basically rounding error from a decade ago.
https://research.google/pubs/?&
Go look at the breakdown of the number of research papers coming out of Google. In 2023 there were 744, in 2017 there were 637. Two decades ago, there were half as many.
No no, the new research does not have a proven track record of industry success, so it doesn’t count ;)
mumble clayton mumble christensen
mumble business cycles
mumble corporate cycles
mumble evolution
It's the circle of life, companies eventually get captured by accountants and lose their initial spark. Even companies like Microsoft had an era of exciting innovation.
I might argue that Microsoft had two distinct eras of innovation
Yes.
Less and even less.
Just like cough Bell Labs maybe?
Both memristors and 3D XPoint aka Optane are in a patent limbo. The current patent holders tried them, and found the current crop too hard and unprofitable, but further research and commercialization would require either buying the key patents or waiting for them to expire.
Commercialization, yes, but not necessarily further research.
Fundamental research is likely unhindered.
But any research of incremental improvements in an attempt to make the tech commercially viable is stymied.
And I would love to see some accessible and available non-volatile RAM measured in gigabytes. It means computers that never get switched "off", no more than a paper book can be switched off. It would take a radically different approach to operating systems to reap the benefits though.
Hard to get research funded when it can’t be commercialised sufficiently to merit the risk.
That one's on Intel. They marketed Optane poorly and created a subpar product for the market they were tackling.
The whole point of memristors is to make something better than DRAM and SRAM. Perhaps even blow up the dichotomy between storage and memory altogether (as memristor MRAM can store information even when powered down, unlike DRAM and SRAM). Yet here we are in 2024 with no viable memristor products on the market.
Holy crap! Someone else knows about memristors!
Oh man, coming from neuroscience, I had no idea why these weren't being more researched. Thanks for the info there.
Just as an FYI to y'all, a memristor is essentially how a synapse works. It's an electrical element whose state tracks the charge that has flowed through it over time (its defining relation is between charge and flux, dq/dPhi). And all for very little power, and it survives power cycles.
Like, we talk about AI these days, but memristor incorporating hardware would really get us there. You'd just, you know, need to invent the solid state memristor first, and then make it commercially viable, and then redesign all of computer architecture from the transistor on up. Easy-peasy.
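For anyone curious how that charge/flux behaviour looks in practice, here's a minimal sketch of the HP-style linear ion-drift memristor model in Python. Every parameter value (R_on, R_off, D, mu_v, the 1 Hz / 1 V drive) is an illustrative assumption, not data from any real device:

    import numpy as np

    R_on, R_off = 100.0, 16e3   # low / high resistance states (ohms), assumed
    D = 10e-9                   # device thickness (m), assumed
    mu_v = 1e-14                # dopant mobility (m^2 / (V*s)), assumed

    dt = 1e-4                   # time step (s)
    t = np.arange(0.0, 2.0, dt) # two seconds of simulation
    v = 1.0 * np.sin(2 * np.pi * 1.0 * t)  # 1 Hz, 1 V sinusoidal drive

    x = 0.1                     # normalized doped-region width (the state variable)
    i = np.zeros_like(t)
    for k, vk in enumerate(v):
        M = R_on * x + R_off * (1.0 - x)        # memristance depends on state
        i[k] = vk / M                           # instantaneous Ohm's law
        x += (mu_v * R_on / D**2) * i[k] * dt   # state integrates the charge that flowed
        x = min(max(x, 0.0), 1.0)               # keep the state physical

Plotting i against v gives the classic pinched hysteresis loop: the resistance "remembers" how much charge has passed through it, and that state persists when the drive goes to zero, which is the power-cycle-surviving, synapse-like behaviour mentioned above.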
Demis on designing DeepMind:
"When we set up DeepMind, I took inspiration for our research culture from many innovative organizations, including Bell Labs and the Apollo program, but also creative cultures like Pixar."
https://www.theatlantic.com/sponsored/google-2023/unlocking-...
As for materials science, there are signs of life in graph nets https://deepmind.google/discover/blog/millions-of-new-materi.... Meanwhile, AlphaFold perhaps sets a precedent for deep learning systems that advance chemistry. I don't claim these are revolutions, but extrapolating into the future, they might be seen as the catalysts. Maybe.
Personally I think chemistry is going to trail a long way behind biology for a while (in terms of ML solutions). The data, supporting libraries, and funding doesn't seem to be on the same level.
I liked Bell Labs ever since 1986 or 1987 or so. I was considering developing a C++ compiler, but was concerned about:
1. would I need a license to develop a C++ compiler?
2. would I need a license to call it C++?
So I called up their head intellectual property lawyer. He laughed, and said feel free, and thanked me for asking. He said other people just went ahead and copied things hoping BL wouldn't notice.
So thank you, Bell Labs! You guys were the best.
Thanks for the story, it illustrates the lack of cynicism in all ranks, compared to Microsoft, Google or even OpenAI today.
I like reasons and conditions, but reproducing culture takes a lot more work. Though one feels the character and actions of Mervin Kelly were crucial.
Microsoft, Google or even OpenAI today.
Probably an unrelated question, but why the "or even" caveat for OpenAI? They claimed to be an open non-profit; cynicism seems warranted even more so than for Microsoft and Google.
You are right but for OpenAI there could exist in principle a strategy of beating the cynicism out of them by holding them to their word. I might have wanted to encourage people to be optimistic about that.
Who would do the beating there though? And why would OpenAI actually care to stick to their word?
The whole CEO debacle seemed to make it pretty clear to me that OpenAI wasn't interested in being open or non-profit.
Elon might have stuck to his lawsuit, or someone like PG could come out strongly against Altman, but you might need someone of Mervin Kelly's stature and an army of lawyers to do it. Maybe Fei-Fei Li after she has earned her startup cred (so that the suits and politicians have the chance to take a technical leader seriously, put her on the board in place of Larry Summers, e.g.)
What's the ideal end scenario if someone does successfully smack them back down to stand by their original promises? And what's the worst-case scenario if no one stands up to them?
It might be better for one's sanity to start looking at what people and orgs do instead of what they say or what others imagine/hope they do. It's a business and at the end of the day has to return all the money Microsoft invested in it. Ethics is what toilet rolls are made of at large corps.
Today, maybe, but I doubt you could have said that of Bell Labs, at least by Occam's razor. (Maybe they weren't a business all by themselves, but their output was effectively deployed to the parent concern by most standards)
It's always interesting to read how many of the silicon valley startups were founded by people who had IP they'd developed at existing companies where they were told "we're not going to sell this feel free to take it and start a company". I feel like large companies are more reflexive about holding onto IP these days.
While the current behavior is likely a consequence of that earlier approach: their corporate lawyers don't want the legal side of the Unix wars again, and so they created their moat.
Ironically wholesome lawyer story. That's a short list btw.
I'm pretty sure this is him:
https://www.forbisanddick.com/obituaries/William-Bill-Ryan-2...
Damn, RIP.
AT&T's liberal policies regarding software (and computer hardware) weren't out of the goodness of its cold blue corporate heart, but one episode out of over a century of antitrust enforcement against the original Bell System and its subsequent incarnations.
Most specifically, the 1956 consent decree arrived at by the US Department of Justice. In part:
"The consent decree contained two main remedies. The Bell System was obligated to license all its existing patents royalty-free, and it was barred from entering any industry other than telecommunications. As a consequence, 7,820 patents, or 1.3 percent of all unexpired US patents, in a wide range of fields became freely available in 1956."
<https://www.aeaweb.org/articles?id=10.1257/pol.20190086>
<https://pubs.aeaweb.org/doi/pdfplus/10.1257/pol.20190086> (PDF)
My understanding is that the decree also *specifically* prohibited AT&T from engaging in either computer hardware or software markets. The consequence is that when AT&T developed operating systems, compilers, and associated software for the UNIX System beginning in 1969, it literally had no option but to give the fruits away gratis, or hold them internally. That restriction was lifted after the 1984 anti-trust settlement, though it took AT&T a few years to start acting on that. Once it did, the result was the "Unix Wars" of the late 1980s / early 1990s, which amongst other effects put a strong damper on the commercial viability of BSD-based Unix systems, whilst an alternative coded-from-scratch run-alike system originating out of Finland faced no such headwinds. That system was Linux.
(This history is ... referenced in numerous sources, though I'm finding it difficult to turn up specifics. The 1956 DoJ settlement doesn't seem to be online at all, and this isn't the first time I've searched for it. Wikipedia discusses various elements of the AT&T antitrust actions, dating to 1913, but is fairly poorly sourced, see: <https://en.wikipedia.org/wiki/Bell_System#Kingsbury_Commitme...>. I'm pretty sure that the Unix / Linux history is covered in various sources, possibly some of ESR's writings or elsewhere. I'm pretty confident of my facts here, but would prefer being able to source my statements more concretely.)
This is all true of AT&T but remember Bell Labs was a subsidiary of AT&T, and by all accounts from the ground, the Bell Labs management were able to promote a culture shielded from the politics of AT&T.
(Compare with GoogleX, similar intentions, maybe better starting conditions)
IP issues-- an interesting case study would be how the BL managers handled Shockley.
Let's say they had asked for some small royalty (3-5%), with simple legal terms and a fast approval process, common across all their IP.
Would that be a problem?
And isnt that a good thing, for the "ecosystem"?
I was born in ‘88 so I’m envious of people who have stories like these. I’ve yet to experience working for the kind of leader seemingly prevalent back then.
None of this addresses perhaps the main issue. Bell Labs predated the venture capital revolution of the 1970s.
The key insight of venture capital was that firms like Bell Labs were holding on to very valuable resources at compensation rates that were far below what those resources could generate if they were empowered to create their own firms.
This was tremendously successful. While we have no counterfactual, innovation in the US blossomed over the last 50 years. The fundamental research may have languished (I would probably disagree; Moore's law didn't happen through magic, for example), but a tremendous number of companies, including all of the large companies we know of today, which provide the services of the modern world, were a result of venture capital.
Moore's law has been dead in the USA for almost 20 years. The improvements we see come from research in Asia and the Netherlands.
Also the idea that US tech companies are doing much innovation is debatable. They are designing products for the international markets, but the core technologies are more and more coming from Asia.
ASML is using basic research that was funded by the US government; that is why the US has the ability to veto/restrict who ASML sells the encumbered EUV and other related tech innovations to.
Yes, the research was initially done in the US. The current research and development is now done by the group in the Netherlands. There is nothing in the US that comes close to what they have now.
It's not just ASML that is involved in R&D for new chip nodes. And ASML itself has a large part of its operations in the US.
One of us has our geography confused, I think.
VC is great for anything that doesn't need continuous cooperation from more than 1 domain of expertise, or where no expertise is even needed at the outset. Meritocracy, but atomized meritocracy.
Hence, great for software (and certain focal points in hardware scaling) --- not innovation in general, but in fact solidifying a technological caste system: designers > SWE > HWE > every other kind of technical expert
Otherwise, you do need a Bell Labs to make sure your various experts are talking to one another and not constantly in fear of backstabbing by the management, their interns, other experts. (Especially considering in universities these days, rival groups in the same department have to make the same kind of calculations before cooperating)
I can see that YC does pay lip service to building community and such, but perhaps VC is too successful for its own good.
Relevant username, I guess
I'm not sure that last part lands quite the way you meant it to.
You will not "recreate" it, as everything now is short term shareholder profit/dividend driven. As others and the article notes, they held the long view on innovation.
We could really use a large number of bell labs today. It will not happen.
Yes, the trillion dollar market cap businesses got to where they are by focusing on short term shareholder profit (what does that even mean?) and dividends.
Nvidia famously did not play the long game. Nor did Apple with their smartphone bet, or Amazon with their 10+ years of near zero profit margins while building a nationwide logistics network. Or Alphabet with their data centers, plowing billions into YouTube to give access to streaming HD video on demand around the world.
All of your examples could be negated, I will just do one as an example:
Apple did not spend 20+ years developing the smart phone thinking "I hope this research pays off". The same for the rest of your examples.
Bell Labs, OTOH, was literally like "go do fundamental telecom and computer research, and we will see if we can use it to get ahead".
Those are two different mindsets. Amazon had one goal so did Nvidia, Google, etc, with each venture you cited. Bell Labs was a _research park_ of exploration that we do not recreate today.
I don’t see how you are negating my counter examples from your claim. If you are classifying decade+ investments as long term, then we will simply have to disagree.
as everything now is short term shareholder profit/dividend driven. As others and the article notes, they held the long view on innovation.
While things might not be as hunky dory as in the heyday of Bell Labs, I don’t think “everything” is accurately described as short term either.
Craigslist is an example of where this doesn’t occur but the innovation also doesn’t exist.
It's not just a corporate phenomenon. Academic research has increasingly shifted towards a goal of regularly churning out papers for the sake of getting publications, rather than being driven by curiosity and a desire to share knowledge. There isn't a proper long view in academia either and I think their heuristic for the greedy algorithm is even worse than just maximizing near term profits.
Lots of innovation came from Bell Labs and other national laboratories that are the foundations of many modern technologies. I always thought that the private sector alone cannot make fast technological progress without the help of government-sponsored science facilities. It is incredible to see what came out of those facilities: the scientists worked really hard out of genuine interest in such projects, because they either like the process of figuring things out or their curiosity cannot be quenched until they get it.
Also, it seems like promising (a real promise that can be backed up) researchers a comfortable upper middle class income, job security, good benefits, and light-touch supervision is a good match for the type of person who excels at basic research.
Too little security and nothing ever comes to fruition. Too much money sloshing around and you attract the wrong sort of crowd.
Public entities are best suited to providing this type of compensation and employment arrangement.
That sounds exactly like my employer, the JHU Applied Physics Lab. We’re a university affiliated research center, so we aren’t tied to the government pay scale and can provide a better career advancement outlook.
Don't grants still tie you to government pay structures? I know in biology most academic research labs may as well be government entities because that's the main way to get funding, and that funding has various rules attached including e.g. the max pay for a postdoc.
No, we don't get funding through grants. Our work is either direct sponsored work or internal research funded through overhead. We aren't a federally funded R&D center like Lawrence Livermore or MIT Lincoln Labs; we are a university affiliated research center.
We exist in this nice middle ground between academia and pure profit companies where we do a bunch of research and exploratory work without the pressures of trying to get tenure, while also getting to push real products out to our sponsors but without ever worrying about getting laid off because we didn’t meet some growth number for the quarter.
Many outfits have tried to reinvent the Lockheed Skunk Works, but to no avail, because they always tried to improve on it by removing its core principles.
Though I must say, SpaceX may be its worthy successor.
SpaceX doesn't really do any sort of frontier research. They do technology. They are taking concepts that have been done in the past and doing them more economically.
They have pushed the state-of-the art with projects like Raptor-2, but doing fundamental research and simply doing engineering are different things.
Skunk works also did technology
The radar-invisible shapes came from a Soviet research paper!
The C programming language was a successor of other programming languages, too. Not a completely new entity, the first programming language of the world.
Innovations that aren't on the frontier of knowledge, but have great potential for mass deployment, are precisely those that tend to have the biggest impact on humanity as such.
The key thing is time horizon.
Bell Labs invested in research that would bring payoff 20+ years in the future. That's in part because they were a quasi-monopoly. (Also because there was less pressure back then on execs to focus on short-term stock prices.)
It's also because Bell Labs ran a lot on government contracts and grants. The government CAN look 20+ years in the future. And it does.
You can see the same effect in pharma today. Pharma R&D develops drugs that will hit the clinic in the next 5-10 years at most. The true basic research of identifying targets and understanding cancer/Alzheimer's mechanisms to launch future drugs -- that's all funded by the government.
Agreed on the public-private nature of Bell Labs.
I would also ask how we could recreate another exemplar in that space, DARPA. Now of course DARPA is alive and kicking, but what about another DARPA, say in Europe?
The two organizations did have different missions, with DARPA being mostly an investor while Bell Labs was mostly a practitioner.
The question that springs to mind is, why did Bell Labs decay while DARPA did not?
Altos Labs, which specializes in a very complicated topic (anti-aging, longevity), is fully privately funded, though.
Even Bezos can look 20 years into the future and doesn't want to die.
Bell Labs and things like it were around when the primary purpose of a business was to make money, and do it healthily, rather than to find a bigger fish. It's no wonder many of these companies were quite old. They created sound businesses.
For the giants of today it's all about milking existing products for all their worth. Not finding new revenue avenues. The modern stock market is to blame. If number goes down then the earth is on fire apparently.
The other aspect is, why try and innovate yourselves when you can wait for someone else to do it and then try and acquire them? It's short term thinking but that's what business is largely about these days.
Maybe this will change with time. I suspect it will as the world is on a track to become more insular again. This may drive innovation at home.
Back in the day, people wanted to build a sustainable, long-term business. Now they just want to create profits, and it often happens by dismantling these great decades-old companies brick by brick, until the line don't go up no more. Then you collect your bonus and move on to the next victim. GE and Boeing have been gutted. What valuable company is next?
We live in a time when the incentives are ass-backwards. Your customers should be first, your employees second, and only then, all the way in the back, are your shareholders. When the shareholders are placed first, you find yourself in broken, late-stage capitalism.
For the giants of today it's all about milking existing products for all their worth. Not finding new revenue avenues.
AWS/Azure/Google Cloud?
Apple Watch/airpods/M processors/Vision Pro?
Meta spending tens of billions on VR?
Nvidia designing software to advance AI?
TSMC and ASML and others working to get chips more efficient and powerful than ever before?
Tesla popularizing electric vehicles and establishing charging networks.
The gent in the photo was my postdoctoral advisor at Murray Hill. He's standing by the first CO2 laser which he invented and that was still in the lab I worked in when I was there.
Very cool. What did you use it for?
The laser was 30 years old at the time and kept in expectation that it would go to a museum, perhaps to the Smithsonian or equivalent. I didn't use it in my work. My advisor's retired technician would still come in once every couple of weeks because of his love for the Labs. Towards the end of my stay he and I put in fresh gas tanks and fired it up. All the 1960s vintage glass, power supplies, mechanical and optomechanical systems worked and it lased like a champ and made firebrick beam dumps glow.
I don't know what became of it.
From a non-tech perspective, IKEA, as a private company, shares a similar focus with Bell Labs on enhancing individual components to improve the overall system.
What are some examples?
What I've noticed buying Ikea stuff over the last 30 years is that it's gotten progressively lower-quality.
Take for example their Kallax shelves. I've purchased them at least 4 times in the last 20 years, each time, a part that was metal before got replaced by a plastic or wood part, the wood got thinner or less dense, .... They still work so maybe that's innovative.
Dunno. Essentially AT&T funding is no different than tax funding. Any uni lab could recreate the conditions.
I believe the academic system is way too gamed nowadays for these kinds of institutions not to drown. Like, there is focus on metrics and numbers rather than fundamentals.
The vertical monopoly is the big key. A uni lab couldn't replicate that.
People who are more familiar with Bell Labs: what were the compensation and life like when one worked there? Were there any extremely alluring alternatives, like a FAANG of the day, where even a bottom-line individual contributor could save a decent chunk of money after half a decade of work?
It’s hard to convince extremely talented and motivated people to go into research nowadays when ad-tech and adjacent products pay so much more. Can’t even blame them, as it’s extremely high risk with very low reward. Obviously there are a lot of exceptions, but still.
I dunno about that. I'm regularly _flabbergasted_ by what hackers will do just for fun, and how they run circles around an industry veteran like me.
Look at the modding scene for games (an extreme example being a 30 year old obscure German game called "Gothic" where the modding community is still building amazing stuff on top of this ancient piece of software). Look at 3D printing, or people dabbling with Arduinos and Raspberry Pis. Look at the Emacs community and all the packages people build with a level of attention to detail you rarely find in commercial software.
Those people are all doing it for fun. That's my dictionary definition of "extremely talented and motivated people". The world is big, and I figure those kinds of people would happily work in a place where they can put their talents to use full time for a livable wage. Provided they don't feel like they got the short end of the stick compared to their colleagues/managers, financially or otherwise, maybe.
Probably not in the average tech company bureaucracy with all the overhead, micro management, politics and user-hostile antics though.
Big oil arguably had some of the closest up until the past ~10 years.
I think folks may not realize how large Exxon's and Shell's pure R&D labs are/were. The other majors have cut much more heavily on pure R&D in the past 20 years, but Exxon and Shell have always focused on pure research fairly heavily. It's been cut back in the past 10-ish years, but the R&D labs at both are still present. The breadth of it is significant, and while sure, it's geology, metallurgy, chemistry, etc. (i.e. it's not what folks here likely think of as "tech"), it is fundamental research as opposed to applied research. A lot doesn't show up under their R&D budget because it's combined with internal consulting, but it's still a large group of folks hired to do R&D with diverse backgrounds.
Those labs still exist today, but they're not as large as what they were in the recent past (e.g. 2010), and especially compared to their heyday in the 70's.
Considering what those companies did with the most valuable research they ever produced, the climate predictions, they don't seem particularly worthy of emulation.
Technicians in the lead instead of marketing men.
Public money instead of short-fad-hunting venture capital.
OpenAI was founded in 2015. That's long time for venture capital to wait for possibly getting ChatGPT out there.
I've seen this question pop up periodically, usually addressed in the context of a single, usually historical, example. Bell Labs seems to most often get the nod as the canonical "Blue Sky" corporate R&D lab, though Xerox, DuPont, and IBM are others. DuPont specifically is the subject of a book by David A. Hounshell, Corporate Strategy: DuPont R&D, 1902--1980.
I'd suggest a broader consideration:
- What are examples of extant or historical commercial R&D labs?
- What large monopolistic companies haven't developed a significant R&D reputation, either for failure to establish a significant such operation, or a failure to develop awareness of their operation?
- What business sectors generally have failed to establish a strong R&D tradition?
I'm not going to go into a full answer for obvious reasons (a full response would be an interesting PhD thesis or course of study, not an HN comment), but we can outline some interesting aspects...
The companies which did establish large R&D labs have tended to be in high-tech areas, largely information technology though also chemical and hardware fields (DuPont, arguably Xerox).
Specifically I'm aware of: Bell Labs (AT&T), PARC (Xerox), IBM Research, Kodak Research Laboratories, DuPont Central Resource, RCA Labs, General Electric Research Lab, Westinghouse Research Labs.
Notably absent from significant US industrial R&D history: railroads, shipbuilding, steelmaking (e.g., US Steel), oil (Standard Oil), electric power utilities generally, and the financial services industry.
The aerospace sector stands out for having far less major corporate R&D research (or prominence) than one might otherwise expect. That's not to say that there are no such operations: Boeing, Lockheed Martin, and Northrop Grumman all have R&D operations. A possible explanation is that to a major extent NASA and US military branches effectively provide R&D cover for the sector.
My general sense is that there's less such activity in the sourcing sectors --- agriculture, forestry, fisheries, and livestock --- than might be expected, though Bayer (including Monsanto), Syngenta, DuPont, BASF, Cargill, and John Deere do have some activity. Again, government research is significant (e.g., USDA, Fisheries & Oceans Canada (DFO)), as is academic research (land grant universities in the US).
Manufacturing, construction, and transportation generally seem to have a weak presence. To the extent that auto safety labs exist, many are strongly supported by the insurance sector. Device certification (e.g., Underwriters Laboratories) is strongly similar. Both also see government consumer / product safety operations.
Healthcare research tends to be organised around government units, academic research, and specifically-organised new-product (mostly pharmaceuticals, though also devices and processes) commercial operations, the latter often spin-offs of larger established pharmaceutical companies as a risk-isolation / diversification process. (The organisation of Novo Nordisk, recently notable for Ozempic, is its own fascinating story.)
My general sense is that wholesale, retail, and financial services have little R&D in the spirit of Bell Labs and similar Blue Skies research. There's quite a bit of tightly-focused product development, applied, and targeted research, but in scope, spirit, and freedom that's quite a different approach. There's some R&D in academic settings (retail/shopping behaviours, e.g.), or by government (the US Federal Reserve and the monetary system, adjunct to the financial sector).
Teasing out some general trends, corporate Blue Sky research seems to be favoured by:
- High tech fields, notably information technology, electronics, pharmaceuticals, biotechnology, and to a lesser extent aerospace and energy research.
- Less favourable sectors: traditional manufacturing, utilities, wholesale & retail trade, heavy industry, extractive industries, and transportation.
- A strong and long-term stable corporate structure. Note that AT&T was initially founded in 1885, whilst Bell Labs wasn't established until 1925, 40 years later. DuPont preexisted its own labs by a century (founded in 1803 and 1903 respectively). RCA (1919) predated its labs (1942) by 23 years. Xerox (1906) predated PARC (1970) by 64 years. Labs don't spring up overnight, and even large and technical companies might not form labs if their tenure is short.
- Monopolistic tendencies. Labs tend to form around monopolies, though this may be derivative of longevity. Not all monopolies form labs.
- Payoffs to R&D. In Hounshell's book on DuPont, much is made of the comparative fecundity and sterility of early (1930s) and later (1960s, 1970s) research. Effectively, the viable product search space had much low-hanging fruit early on, particularly in the 1930s (most plastics we know of today were discovered over roughly a decade), while by the 1960s the lab was failing to find marketable products or enhancements (notably a synthetic "leather" product, "Corfam").
- Insufficient or ineffective government-led or academic research. In numerous fields, the most significant R&D still occurs in government / academic settings.
- Corporate mentality and mindset, possibly. There seem to be fields, *including those with significant research potential*, in which R&D and sharing cultures simply don't exist to any great extent. I'd put finance and insurance in this group. "Why" remains a good question.
Great contribution!
What would it take? Two world wars leading to a protracted cold war between two "superpowers."
No surprise that the end of the cold war brought about the end of our collective need of a closely held quasi-national entity like AT&T.
I think the real question is "What would it take to recreate it in the abstract, from the disparate components we have attached to the internet today?" Or to rephrase it "What happened to the early promise of the internet when it was mostly available only through Universities?"
IMHO, the downfall is singularly: ad revenue. Monetizing your website/app/product indirectly in this way has been the sole source of many (all?) negative emergent features of modern software and products. i.e. a focus on attention in lieu of quality directly incentivizes utilizing things like dark patterns, addictive features, etc.
A time machine.
These places (Xerox Parc is the other obvious example) existed because recent breakthroughs opened up the possibilities and they exploited almost inevitable inventions.
I think the mistake is thinking that the flurry of creation is linked to the company or the institution rather than the general state of progress.
As soon as someone invents viable Human-Machine Interfaces or whatever there will be some company that invents a load of seminal stuff around that.
It's certainly linked to a company or institution that was able to harness the potential and release the innovation as research or products. Not all of them did.
It's like wind energy: the wind blows for everyone, but some build a turbine or a windmill. (Though indeed, when the wind is not blowing, even the best turbine is going to be a disappointment.)
At the most basic level it's all about money. AT&T had a monopoly on telecom in the US and had the money to set up the research labs. After that you need people who can recognize and find brilliant people in their respective fields and give them the resources they need.
I'll leave this here. AT&T Labs is still doing interesting things.
I'm the last relative of William Shockley (cited in the story) who's still in the industry.
I like the way this story portrays his success in solving open-ended problems as long as they aligned with the company direction. I've personally never made that connection between us, but it resonates with where I've personally found success, and currently am, in my career as well.
Fortunately, a lot of our beliefs about the world are quite different.
I would like to ask Col. Karl Nell (ret.), who was working there during Bell Labs' trivestiture (timestamped): https://m.youtube.com/watch?v=w9cIcWWsH0c&t=75
Lol, I think C++ is the main reason for Bell Labs' decline, due to the crash.
Corporate dominance.......Sorry!
Didn't high top marginal tax rates in the 1960s-1970s era give a lot of corporations an incentive to put profits back into R&D, as it would avoid high taxation on those profits (and I think the R&D outlay was a tax write-off, too)? E.g.
https://slate.com/business/2012/07/xerox-parc-and-bell-labs-...
Everyone talks about the monopoly – don't forget the pressure to Do Good Deeds and to look like You're Bringing New Tech to avoid the general public's grumbling about, well, having to rent your phone from Ma Bell for $$$$$ and to live with their restrictions on what you could hook up to their lines. An expensive PR operation, but a good one for sure.
I prefer cynical explanations for why the C-suite would like to keep Bell Labs around, and other posters have also mentioned managing perceived profits, and avoiding anti-trust.
Being a subsidiary of a government-sanctioned, vertically integrated monopoly gave Bell Labs a broad research scope and freedom to pursue long-term research projects unavailable to most other industrial labs.
This feels like a big-hitter. In the UK, most government expenditure these days is subject to a sort of KPI management process. It's very difficult to do long-term work when the team has short-term targets to fulfill. That said, I wonder if security-critical organisations like GCHQ work that way. Obviously Bell labs was a US phenomenon, but I'd be surprised if the same problem doesn't persist over there to some degree.
Probably a top ten mistake by the government to break up Bell Labs.
An important early impetus for research at AT&T was the immense labor component of contemporary switching technology. Hiring the number of qualified operators needed to scale out to a fully national phone system wasn't clearly possible, let alone sound business.
The scale of the challenge over time is described pretty well here:
https://www.richmondfed.org/publications/research/econ_focus...
AT&T thus had both a clear long-term goal, national phone service, and multiple long-term research challenges needed to make that goal possible (amplification) and economically feasible (switching).
For folks interested in delving further into Labs history, this recent paper has a wealth of relevant citations, while also clarifying some important corporate cultural framing that might otherwise be confusing:
https://www.researchgate.net/publication/365365963_BELL_LABS...
I wrote a review of a book which describes to me the best strategy for recreating Bell Labs like phenomena [0]
[0]: https://asindu.xyz/posts/the-nature-of-technology-book-revie...
What made Bell Labs so successful is simple:
- public funds
- researchers free to research instead of producing something marketable in 6 months or less
- long term projects and goals as a NORMAL thing instead of being considered folly
- a school system designed to make citizens not useful idiots https://www.theatlantic.com/ideas/archive/2020/08/i-was-usef...
Unfortunately, achieving the last point in the present time would in theory require at least some new generations; in practice the ruling class does not want anything like that at all, so it's mere utopia.
As a bottom line: Europe was the most advanced region of the world when it did publicly funded research and industry; it committed suicide with WWI. The USA copied and advanced the European model after WWII, then committed suicide as well with the financialization of everything. Now it's China that rules, but the model, the substance, is the same: countries grow and prosper with publicly funded research with long-term goals, with researchers free to direct themselves and industry picking marketable research from them instead of commanding research. Countries may vary, populations may vary, implementations may vary a bit, but the principle has so far proved to be the same.
As a result we are in a decline phase, which will probably result in a series of LOST wars, and a long era of poverty and destruction before a new start.
Bell Labs R&D budget in today's dollars would be about $1.5 billion. Google is closer to $10 billion, with DARPA around $3 billion and NASA at $10-15 billion. The total budget for all national labs is $12 billion.
https://www.nextbigfuture.com/2015/08/comparing-research-bud...
I was thinking about a very small version of this topic the other day--how can you re-create the intrinsic motivation of a startup in a large company? My conclusion was you just can't. The combination of scarcity, urgency, and potential opportunity present in startups seems to catalyze something that can't (at least maybe not ethically) be simulated.
Maybe in the large what we really need to get it together and pull these things off is immediate existential threat e.g. a world war or crisis of similar magnitude.
To a degree, SpaceX is a modern equivalent of Bell Labs when it comes to space technology. They attracted a lot of frighteningly smart people and untied their hands.
When thinking about Bell Labs etc., I often come to the conclusion that barriers matter more than anything else. If you force your engineers and scientists to hold long pointless conferences and produce some papers on regular schedule, plus submit their various work-related demands to an arbitrary set of rules and decision-makers, the result is similar to throwing sand into a mechanical machine.
But these limitations usually exist because of money shortage / funding, and few people or organizations have enough money to just shower them on their eggheads, like Bell Labs once did.
I'm not convinced you'd be able to replicate because it was a product of time and place. Mike Lesk, who worked at Bell (he implemented uucp, his motivation was purely to provide patch distribution according to what he told me) said one of the key experiences he took from Bell was how easy it is to occupy a vacuum and how hard it is to push aside established state.
Unix exists. X11 exists. Kubernetes exists. Gmail exists. Even if you recreate a small team of highly accomplished experts whose motivations are not aimed at a VC IPO, who are mutually trusting, and who have a beneficent employer with nation-scale infrastructure they can burn to support them, you still have the problem that there's no vacuum; you're pushing aside other stuff.
Giving birth to C and Unix in the ashes of BCPL and multics with the pdp11 as an architecture, that's a huge one time thing. Invent a time machine and it might not happen the same the second time round.
a ranch and infinite money
Another alien crash.
Crystals! (Tangent, but fun)
Lab Grown Quartz https://www.youtube.com/watch?v=fcu8Lz5PHMw
Alan Holden https://www.youtube.com/watch?v=Wp6bN9vN6e4
Bell Labs is not unique though.
We had NatLab (Natuurkundig Laboratorium, physics laboratory) at Philips. It's the birthplace of the compact cassette and the compact disc, and it did a lot of fundamental research. The main reason for its closure is usually cited as decentralized funding techniques in combination with an economic downturn, though betting on the wrong technology in the '90s sure didn't help (e.g. better flat CRTs instead of LCDs).
We don’t need to recreate Bell Labs.
We have SpaceX and Starlink - that's where innovation is happening. Compare Dishy to the Starlink Mini (which can run off a power bank), or the first live feed (no radio blackout period) on orbital re-entry. And they cost much less than ULA and provided much more to the public. Practically saved Ukraine (try defending your country with no communications). Or Tesla, who has made EVs (and charging stations) an actual thing. Or Neuralink, who's allowed a quadriplegic to use a computer like you and I.
We don’t need giant monopolies like Microsoft who - what did they deliver over the past 3 decades? A knockoff of the Mac. OSes full of vulnerabilities that require antivirus and firewalls. Clippy. Nearly killed Apple (who gave us the first usable GUI, all our songs in our pocket, and the first actually smart smartphone.)
What does Microsoft do that is innovative? They buy actual innovative companies (they “own” OpenAI for all intents and purposes). But also open source dotnet. Typescript. LINQ and EF. And I hate to hand them the trophy, but the Surface Pro is a real computer - the iPad Pro can’t do real work like build you an App or site with VS Code - it can’t even run Apple’s own XCode.
We don’t need giant monopolies. Plenty that smaller companies can do to outmaneuver incumbents. SpaceX will figure out passenger planes before anyone at the ULA will figure out how to make their spacecraft not leak.
A marginal tax rate that discourages profit hoarding.
Any other answer is just wrong.
Lots of companies recreated Bell Labs.
It's not the 20th century though.
Thomas Edison invented the first commercial invention laboratory.
The Wright Bros invented the first directed research and development program, where theories are developed and tested on a series of prototypes, leading towards the objective.
It would require dismantling, or at least scaling down, fintech, because that's where many smart people end up - the ones who would have been inventing Cool New Stuff if they hadn't been co-opted into working as quants.
There have been at least two lost generations of fundamental research because of this.
For those who would like to read more about Bell Labs, I highly recommend "The Idea Factory" by Jon Gertner[0] - one of my all time favorite books.
If you haven't you HAVE to read "The Idea Factory" by Gertner which was linked in the article.
I think the conditions that led to Bell Labs contrast quite well with the "Shareholder Supremacy" episodes of Ed Zitron's "Better Offline" that just played. You can read pre-Welch GE (or 10x better Bell Labs) as the opposite of the aspects of business that Zitron rails against there.
Timing + Infinite Money
This was only possible during an unprecedented revolutionary time in scientific discovery, which is what allowed for it. 1925-2000 was the most revolutionary time across all fields of science, probably ever. It's mind-boggling the progress made in that period.
We will likely not see a repeat until someone comes up with a quantum-classical unifying law and the resultant blowing open of all possibilities of engineering
1. LSD hippie culture/aftermath 2. Pre-privatised/embryonic technology industry 3. A few geniuses
The question implies time travel, better to look to the future
Rather than a monopoly, I'd say it takes 2 things (that are often present in monopolies):
1) Big piles of money 2) Enough of a moat that there's no immediate threat from competition
#2 drives a lot of the desire to innovate, the hope being that by the time competition catches up, you've got so much new stuff that they're back where they started.
The closest thing I can think of in the tech space now is Apple with consumer hardware moat. The other big players are still cannibalizing and copying each other's business segments.
You just described Google 10 years ago. And I would argue they have been on a downhill slope ever since.
So I think it's something else, perhaps great leadership at the right time. Google certainly had the talent, but Pichai has been an awful steward.
Don't forget Microsoft ever since the early 90s or so. And I would argue they haven't produced much usable stuff in the research realm at all.
So I think it's more than just #1 and #2. There's also a #3: a national culture where doing great things is seen as more important than simply maximizing shareholder returns. The US had that many decades ago when it sent people to the Moon, but it lost it sometime after the 1980s.
I'd love to see an alternate timeline where one of these giant companies took their trillions of dollars and said "You know what, fuck you, shareholders! We're going to be pure research now! We're going to turn our offices back into a college-campus-like playhouse of innovation! We're going to hire the best and brightest again and give them amazing things to work on rather than improving click-rates by 0.1%! We're going to do only moonshots! We're not going to maximize shareholder value!" The stock price would go to $0.01 but it would be a paradise for researchers and developers.
It would be such a breath of fresh air from all of the dollar-maximizer companies that we seem to be stuck with now.
> I'd love to see an alternate timeline where one of these giant companies took their trillions of dollars and said "You know what, fuck you, shareholders! We're going to be pure research now!
If this alternate timeline where giant companies are autonomous creatures were to be, why would there be shareholders?
why would they even be companies?
Not sure if true, but I read a funny one: Bezos' entire fortune could keep the Pentagon afloat for 2 whole weeks.
Forbes is saying he's got $211 billion, so more like 3 months if we're comparing to the whole DoD budget.
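For what it's worth, a quick back-of-the-envelope check of that figure (the ~$850B annual DoD budget here is my own round-number assumption, not from the thread):

    net_worth_b = 211                 # Forbes estimate cited above, in $ billions
    dod_budget_b_per_year = 850       # assumed rough annual DoD budget, in $ billions
    months = net_worth_b / dod_budget_b_per_year * 12
    print(f"~{months:.1f} months")    # roughly 3 months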
That's number of shares times the current share price, not money. If he starts selling those shares they won't all sell at that price.
> why would they even be companies?
Because that's what was asserted to be in this alternate timeline. An alternate-alternate timeline could have no companies, sure, but the discussion is centred around a specific alternate timeline.
Maybe the companies could be run by LLMs. They will still need shareholders because, well, as far as I know at least computers can’t own property yet.
An alternate-alternate timeline might have companies run by LLMs, but the alternate timeline under discussion states that the giant companies act on their own accord, with their own dollars – which, again, questions where the shareholders fit.
As much as I dislike Elon these days, this is basically how all his recent companies have been set up. Tesla, SpaceX, Neuralink, and arguably even Boring are all moonshots, the latter 3 being so anti-shareholder that they aren't even publicly traded.
He might be anti shareholder but that does not mean he is pro worker. https://www.businessinsider.com/elon-musk-reportedly-terrifi...
Why would being anti-shareholder make someone pro-worker?
There is nothing special about Elon choosing to do that, it isn’t some contrarian masterstroke.
Of Arianaspace, Blue Origin, Rocketlab, SpaceX, United Launch Alliance: only Rocketlab is publicly traded.
Google more or less did do this. Why do you think they created Alphabet? To house all of their experimental moonshot companies.
I thought it was for regulatory shielding
I don't think that works either.
You need a playhouse of innovation, but you also need a pipeline of taking that innovation and making lots of money/practical applications. And you need the interaction between people doing practical work and feeling limitations and the playhouse trying stuff out.
Bell Labs worked because AT&T had huge field operations, a huge automation problem (telephone switching and billing), and a huge manufacturing arm for telco equipment and customer equipment, and a reputation for solving problems related to communication. Small innovations easily contributed to the bottom line, and large innovations enabled massive expansion of communication. Also, there's a huge benefit from research being connected to scale --- Bell labs innovations were likely to be applied in AT&Ts businesses quickly which informs future innovation; if you're an unconnected playhouse of ideas, it's hard to know if your innovations can be applied.
For better or worse, things are less integrated now. Telcos still have large field operations, but they don't usually manufacture the equipment. Automation of telephone switching is so complete, I don't know if there are any operators anymore. Billing for usage is very limited.
You see some companies developing the processors in equipment they use and sell, which is more integrated than when it was all Intel/AMD/Qualcomm.
And then...the company goes bust? Why would anyone do this?
There seems to be some amnesia about the sort of company AT&T was. AT&T was first and foremost a profit maximizer. They were the poster child for abusive monopolies. Remember, until the 1980s, literally no one owned a phone. All hardware that connected to a phone line was owned by AT&T.
I would disagree. C# and F# have been good. Advances in Haskell. AR advances. Those are all interesting, just off the top of my head.
For a while, but they seem to have faceplanted in this realm.
Adding to your list, Z3, Lean, F*, Koka are all non-trivial I think.
Other than Haskell: OSes in memory-safe languages (whose System C# and Bartok features started arriving in .NET Native and .NET Core), Haskell contributions, plenty of cloud stuff on Azure, Azure Sphere with Pluton, an LLVM-like compiler stack using MSIL, contributions to the OpenJDK JIT, Q# and quantum stuff, ...
If you count OpenAI, Microsoft has produced a lot. Bell Labs existed in an era of conglomeration, whereas Microsoft exists in an era of startup culture, but the idea of monopolies spending cash on research is still there.
The corporate tax structure was different back then as well. Much higher tax rates and it was far more difficult to simply ship your profits off to another more tax-advantaged nation. If the choice is spend the money hoping to find the next great innovation or hand it to your government and hope they do something productive with it, companies are usually going to choose the former.
Today, you can license your profits to Ireland or some other off-shore holding company and never have to choose.
> So I think its something else
The government allowed Bell to maintain its monopoly position with the understanding that Bell Labs would work for the public good. In other words, Bell was unable to capitalize on what Bell Labs created. Bell Labs, in hindsight, is so significant because everyone else was able to take their research and do something with it.
Google has given us things here and there (the LLM craze stems from papers published by Google), but it seems most of the research they are doing stays within Google. For example, I expect you will struggle to find what you need to start a self-driving car company based on Waymo’s work.
> Google certianly had the talent
Including Bell Labs alumni. They used their time to create Go. If that isn't game changing, perhaps Bell Labs was also a product of its time?
Go is basically Limbo, with a bit of Oberon-2 sprinkled on top.
They got more lucky with Go than with either Limbo or Oberon-2, thanks to Google's moat, Docker pivoting from Python to Go, Kubernetes pivoting from Java to Go, and both projects hitting gold with the devops community.
Google itself keeps being a Java and C++ shop for most purposes outside Kubernetes.
Google not using Go keeps with the Bell tradition of not using Bell Labs work, I suppose, but otherwise how does that relate?
Bell Labs did use UNIX, C, and C++; otherwise we would luckily be using safer environments.
It doesn't seem that they ever really used Plan 9, though. Are you trying to imply that Bell Labs was just a few-trick pony, or what? I'm still not clear on what connection your story about Go/Limbo/Oberon-2 has to the topic at hand. It seems you forgot to include the conclusion?
The Plan 9 guys are UNIX guys.
Punchline is without Google's moat, Go would have gone the way of Plan 9 and Inferno.
> The Plan 9 guys are UNIX guys.
Who are also the Go guys, but we already know that as it was already talked about much earlier in the thread. If you have some reason to return to that discussion, let it be known that you have not made yourself clear as to why.
And that relates to the topic at hand, how? I am happy to wait. No need to rush your comments. You can get back to us when your conclusion that connects this all back to what is being talked about is fully written.
"Including Bell Labs alumni. They used their time to create Go. If that isn't game changing, perhaps Bell Labs was also a product of its time"
They failed twice to create anything else, and only succeeded yet again thanks to Google's moat and a set of lucky events caused by the Docker and Kubernetes rewrites in Go, followed by their commercial success.
Bold claim that Go succeeded. A couple of software projects used by a tiny fraction of the population (hell, a tiny fraction of the software development population!) is of dubious success. Just about anyone's pet language can achieve that much. What sees you consider it to be more?
Also interesting that you consider UTF-8 to be a failure. From my vantage point, that was, by far, the most successful thing they created. Nearly the entire world's population is making use of that work nowadays.
Still missing that conclusion, though. Again, don't let me make you feel rushed to get your replies out. I am happy to wait until you are complete.
It seems like there was something special about the innovation derived from collective spirit during that time, the 1940-1960s. Post-depression and -war momentum combined with major scientific and technology milestones that had to be passed, with a dynamic caused by an unprecedented global situation.
What was most special is that it was new, few were doing it, and so any achievements stood out and were able to make a splash. Nowadays you can't even step outside in a rural area and not bump into someone who is trying to make their mark.
We now see more achievements made each year than Bell Labs achieved in its entire history, but because of that there is no novelty factor anymore. It is now just the status quo, which doesn't appeal to the human thirst for something new. Mind-blowing discoveries made today are met with "meh" as a result.
It's kind of like a long-term relationship. In the beginning feelings are heightened and everything feels amazing, but as the relationship bonds start to grow those feelings start to subdue back down to normal levels. Without halting innovation for a time (perhaps a long time), to the point that we start to forget, I'm not sure there is any way to bring back the warm fuzzies.
Some major drivers of innovation for that time period:
A lot of people were very recently very upskilled on the government's dime. A shitload of Navy personnel had just been trained on things like electrical engineering to better understand the radios and radars they were operating, and others were taught how computers worked and accidentally turned into early software developers.
Labor was in short supply, so companies had to provide significant benefits, and eventually pay, to make their businesses work. Low-level labor making more money directly enriched a majority of Americans at the time, and a richer base like that has more resources to devote not just to buying goods but to investigating WHICH goods are worth it, i.e. making market competition actually work instead of just being about who has the lowest sticker price (which is a direct result of modern Americans being dirt fucking poor in money, time, attention, resources, etc.). A shortage of labor, and therefore an increase in the wealth of the everyman, may have also contributed to the renaissance.
The US was still PUMPING money into basic sciences and basic technology research. This is especially useful for materials science which feeds a lot of engineering.
So many people had been recently put into such high positions that there wasn't as much of a "management class"; workers were managing workers. People could focus on doing things instead of office politics, ass covering, and pretending to look busy to people who have no freaking clue what they are managing.
Germany lost IP protections for many things. This created fertile ground for innovation, because everyone could freely make a widget that matched the now-invalid German patent X, so there was high compatibility in widgets, while there was also a high incentive to improve your version of Widget X in some way so you could patent that improvement, meaning there was a lot of exploration going on in the solution space. Patents explicitly freeze investment into large solution spaces, especially with how vague modern patents are, how incompetent and undertrained/understaffed the patent office is, and how hard it is to get a court to invalidate bullshit patents.
Basically all those things businesses insist are bad made the world better for consumers, and more importantly, people. State investment into the population paid huge dividends. Who could have guessed.
And they did innovate in those days, e.g. Gmail, and they purchased and integrated very innovative things like Writely (Docs) and Grand Central (Voice).
At some point they got the idea that because they made so much money, new innovations could not be valuable if the profits would likely be insignificant compared to the existing oil gusher.... which is stupid, because nothing is ever gonna look that good early on, so they took on the persona of a flaky ADD schemer -- like Cosmo Kramer from Seinfeld -- always scheming with some new idea, but never with the perseverance to stick with it for more than 25 minutes before moving on to something else -- because what can possibly ever compete with ads?
Google actually tried to be a "fuck you investors, we're gonna do research" company with the supershares, but they screwed up:
A big mistake they made, which then got replicated throughout the industry, was issuing stock to employees. It meant that even though Larry and Sergey had supershares and no shareholders could countermand their beliefs, the employees (especially middle managers) could, by changing their work decisions to try to boost the stock, thus indelibly infecting the company with the Wall Street short-term-thinking disease that the supershares were supposed to prevent.
It was pathetic how interested in the share price the average L4 Google employee was when I was there in the mid-teens, at least around the quarterly vest date.
Whoops. Should have listened to Buffett -- never give out stock. Give performance bonuses, but never stock. You cannot depend on a large mass of people like that to stick to a higher purpose and not get tempted by the money.
We individual employees had little control over stock and I don't remember anybody being particularly delighted when Patrick left and Ruth took over, despite the stock going up up up. I don't think our GSUs were influencing work decisions on aggregate.
What was influencing decisions was just the same as always: ad revenue. I think people who worked outside of ads at Google probably didn't get the sense of it. I came into Google as part of a competition crushing, monopoly focused acquisition, so I started off cynical. I could see right away what Google was, just a money printing machine with everything else they did a way to flush some of the excess gravy away. And because of that it all lacked purpose or real motivation.
And into that void just crept more and more careerism and empire building.
Agree about the ADD thing, but I think it's more a product of Perf economy than stock price.
I lasted there 10 years for lack of anything else worthwhile to go to in my geo-region, and because the money was too good to pass up. But what a weird, weird, weird place. Wandering around the MTV campus was like being in a real life Elysium (movie) or something. I could never figure out what all those people were doing between free meals. I felt guilty taking the money, the whole time.
Your experience sounds very similar to mine.
This is precisely how I would summarize it too. Very well stated.
That I agree with, and I already regret implying stock was a bigger part of it than Perf/Promo-- I just wasn't thinking about them when I wrote that.
Perf was a bigger factor for sure from our ground level (L3/L4/L5 presumably) and was the main reason I left, as all I saw around me on my team was people doing things that added zero value or even subtracted it, but at least they created a new project that would sound good to the Promo committee. Managers helpfully shepherded their teams in that direction, which was kind and all for growing our careers and helping us get more slop from the trough, but it still felt gross. I couldn't bear seeing that as my future.
Sometimes, when I look at the housing market, or think about colleges for my daughter, though, I regret not playing along a little longer. I don't know.
THAT SAID: I don't think the stock incentives affected decision making at your or my level, but they may have at higher middle-management levels, where GSUs started to become larger both in absolute value and as a percentage of compensation. Those are the levels of the people actually deciding what products to close down or pursue. I wasn't up at that level, so this is just a theory, I admit.
Yeah, fair enough -- I still don't know if I've done a good job of successfully explaining to myself what the fuck it was I experienced working there and why, but it sounds like you and I had very very similar experiences.
Everything about the internal incentive structure was fucked up and incentivized very low value producing behavior.
And yet, I did learn some really valuable things there
- napa cabbage is delicious
- celery root is a thing
- fresh made pasta is better
- quiche and arugula pair great together
- oh my god legit ramen is amazing
Oh wait, outside of the food:
- monorepos, with the right tooling, are unbeatable. all the monorepo haters just don't have the right tooling. google did
- doing meetings right, with agendas and minutes, is also unbeatable, strictly better and more efficient than the alternative
- doing planning right, by circulating proposed design plans for comments, is strictly more efficient than the alternative
- doing cloud deployment right -- this is irrelevant now, most everyone knows how to do this now while google is stuck with legacy borg configs, but for a while, google actually was far ahead of the industry on this
Despite having some fundamentals like that that made certain things way way more effective, it was squandered pursuing cockeyed goals.
Yep. Monorepos and fully vendored third-party deps, the two things I wish other people were doing... because the way the bulk of the industry is doing it is a stupid productivity killing, security, and reliability nightmare.
I was always a square peg in the round hole there, and I have no idea how I lasted 10 years. I was at L4 the whole time and never made the move to even try for L5 because the perf process seemed like such stressful garbage to me and I was perfectly fine with my already generous compensation.
I miss that kind of $$, but oh well. It got to the point I couldn't stand it anymore, and after watching the insane growth in hiring... I knew layoffs would be coming.
How dare someone look out for their own financial well-being!
Well, if you gathered this big mass of people by dangling a nice pay and stock options in front of them then yeah, it does seem a bit odd to lament they are tempted by money.
Google did launch a "moonshot factory", it just never really struck gold, with the possible exception of Waymo.
https://x.company/
Kubernetes seems to have gotten quite a bit of traction. I hear good things about transformers these days too. a lot of people I know use chrome
They did this "10 to the 100th" (Project 10^100) thing where everyone got to submit ideas. They hired an army to look at it all before it went into the incinerator.
I'm kinda curious what machine learning or LLMs could make of such a dataset. If enough people proposed something similar, funding shouldn't be an issue. Perhaps there is even an apples-vs-oranges metric for quality to be had.
what would be your argument for saying Google has been on a downhill slope ever since?
Even when you create the conditions there's always some luck factor. Whether it is leadership with the right vision or the right people in other key roles to create innovation. But I feel like the 2 points above are sort of the starting point that makes it possible for the other things to matter (or more likely, flukes can occur).
I was just going to say “Google 10 years ago”
I’ve also seen it with biotech companies like Genentech. They are making money hand over fist, more and more each year, so they can try for moonshots and give researchers a lot more leeway to do things their way.
But once the money dried up, it was mostly gone. Got bills to pay and need to show returns on R&D in the near term.
Part of the problem, I think, is that Google's relentless focus on information is just never going to be as sexy as the stuff Bell Labs worked on over the years.
You are forgetting a vital element: the C-suite must want to plow the money into research instead of lining their yachts and various houses with it.
Somewhat less cynical view, but maybe corporations were allowed to be more long term focused in the past?
AT&T wasn't publicly traded until 1984, so there's a clue for you right there
By 1984 AT&T had been listed on the NYSE for over 80 years.
“The American Telephone and Telegraph Company (AT&T) was listed on the NYSE on Sept. 4, 1901. Only 11 other companies have been listed longer than AT&T.“
Yeah, impressively wrong gp! 1984 was when they were broken up by Justice Department.
Or maybe they had to be, with interest and tax rates what they were?
Could you elaborate or be more specific? Given how long BL existed, I’m not sure I can pin down what rates you’re talking about
One of the commenters on the original article said "Bell Labs was funded through a 1% tax on Bell's overall revenue, as well as a claim on a portion of Western Electric's revenue, so yes, AT&T was able to recoup Bell Labs' expenses."
This would protect it from (at least certain types of) C-suite predation presumably.
You don't need to mandate a short-term view when executives' massive pay packages depend on stock valuations: they will chase the money all by themselves, rather than trying to build a company that might make major discoveries eventually.
It's not like corporate executives have carte blanche to make these decisions. They report to the board, who represent the shareholders, to whom they have a legal duty to maximize profits. The entire construct is set up to promote the alternative to building the next Bell Labs.
Why can't we just fund scientists to achieve these goals? Funding has been lacklustre for years.
Because the marketplace is more efficient at directing the funding. I’m not a cheerleader for capitalism either, but it does work better sometimes.
If by more efficient, you mean more efficiently switching to stock buybacks to pad owner pockets outside of tax consequences.
3) tax laws that mandate buybacks + dividends cannot exceed a set percentage of R&D + CapEx spend.
Otherwise 1+2 result in the C-suite deciding to divert all that cash to their pockets. Reference: the past 40 years of big business
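Not that anyone would write tax law in code, but here is a minimal sketch of the proposed cap, assuming a made-up 50% figure (the function name, percentage, and dollar amounts are purely illustrative, not anything from existing law):

    # Hypothetical rule: buybacks + dividends may not exceed cap_pct of (R&D + CapEx).
    def max_shareholder_returns(rd_spend, capex_spend, cap_pct=0.5):
        # cap_pct is an assumed 50%; the actual percentage would be set by the proposed law.
        return cap_pct * (rd_spend + capex_spend)

    # e.g. $20B of R&D and $10B of CapEx would allow at most $15B in buybacks + dividends.
    print(max_shareholder_returns(20e9, 10e9))  # 15000000000.0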
"1) Big piles of money 2) Enough of a moat that there's no immediate threat from competition"
SpaceX right now is in a very similar position.
3) Politically savvy managers with a technical background, who treat money/sustainability as an enabling factor and see the primary output as being technical.
The closest I ever came to this was Australia's DSTO in the late 1980s. The management was technical and cared deeply about their people, the development of those people, and making a positive technical contribution.
An interesting recollection of Bell Labs:
https://quello.msu.edu/wp-content/uploads/2015/08/Memories-N...
I think it's not about recreating Bell Labs, but doing something better that will fit in with today's technical ecosystem. Are entry barriers lower today, or is that part of the problem: low entry barriers mean people don't have to congregate to get sufficient resources?
Based on the book "The Idea Factory" (https://en.wikipedia.org/wiki/The_Idea_Factory), it sounds like the regulated monopoly forced certain constraints on their business, such that they had to prove they were a utility. This meant not having infinite profits, and that the engineers were generally allowed to think long term, since any good idea in the future could benefit the company without short-term profitability risk.
Hunger.
Hunger and some money.
I think it requires a third thing: long-term thinking.
Other commenters noted that Google and Microsoft had big piles of money and a decent moat. They have not created anything like Bell Labs. Google tried, with DeepMind.
The problem is that companies used to think about 5,10,20 years out. Modern companies are gauged and gauge themselves by metrics that are quarterly or annual. They, to borrow from The Fast and the Furious, are living their life one business quarter at a time.
I don't think you can build something like Bell Labs, PARC, or the Lockheed-Martin Skunkworks with short-term thinking.
Google or Microsoft are pretty close nowadays. Look at all the infrastructure both of these sponsored over the years. It may not be as foundational as what BL did, but then again there is not as much low hanging fruit today either.
Also Google and Microsoft for the most part are not even trying to be innovative. Rather they try to milk out every last cent of existing products, driving people to buy their cloud services, or directly or indirectly sell user data to third parties.
There is an insane amount of innovation happening at Google and Microsoft et al. The amount of investment going into efforts like making data centers more power efficient, making better cooling systems, reducing latency or using fiber more efficiently etc is incredible and rivals the work done at Bell Labs back in the day. You just don’t hear about it because these private companies have no incentive to share it.
And that’s entirely separate from the fact that Generative AI wouldn’t even be a thing if not for the research that Google published.
Literally just watched an internal talk about this topic at work (Microsoft). Lots of cool internal research to make things better in every domain, but as you said, it won't be shared that much.
The Open Compute Project is great though, and MSR does awesome research across many domains too, as does Alphabet.
Was the Bell Labs Systems Journal shared outside of Bell Labs contemporaneously? I have original copies of the Unix issue, for example, but have no idea if that was 'generally available' back when it came out...
Without deeply researching the topic, my understanding is that Bell Labs didn't really "open source" everything or really most things. Just look at the later law suit over Unix.
See https://en.wikipedia.org/wiki/History_of_Unix :
"Due to a 1956 consent decree in settlement of an antitrust case, the Bell System was forbidden from entering any business other than "common carrier communications services", and was required to license any patents it had upon request. Unix could not, therefore, be turned into a product. Bell Labs instead shipped the system for the cost of media and shipping.
...
In 1983, the U.S. Department of Justice settled its second antitrust case against AT&T, causing the breakup of the Bell System. This relieved AT&T of the 1956 consent decree that had prevented the company from commercializing Unix. AT&T promptly introduced Unix System V into the market. The newly created competition nearly destroyed the long-term viability of Unix, because it stifled the free exchanging of source code and led to fragmentation and incompatibility. The GNU Project was founded in the same year by Richard Stallman."
I'm well aware of the history of Unix. My point was simply that Bell Labs was historically not a particularly open organization outside of the bounds that they were required to be by law.
There has always been a lot of good research happening in MS, but what we see as end-users are Recall, and the latest Bing news in Edge. Seriously, who are the PMs that allow this cr*p?!?
The ones whose KPIs depend on making it happen.
Do you happen to know why the tremendous progress at BL was shared (did it take a long time?) whereas the progress that happens at today's datacenters is mostly secret? I fear that no matter how much progress they make, if it's not eventually shared, it'll just be lost/wasted and others will have to reinvent it.
My take is that it’s related to the parent commenter’s thoughts on the relative monopoly that Bell had.
If you’re a monopoly with no practical competition, sharing your accomplishments gets you goodwill and has few downsides. But if you’re Microsoft, and one of your big moats and competitive advantages is the massive fleet of data centers you’ve been building up over the years, you don’t want to hurt yourself by giving your competition the information they need to build new, more efficient data centers.
So if Microsoft discovered something very useful (e.g. new battery technology, or a more efficient air conditioning system), but decided that it was something they didn't want to develop and market, would they share the knowledge or just bury it in case they might want to use it someday?
I imagine they would do something like this: https://news.microsoft.com/source/features/innovation/datace...
These innovations don’t always have to be “marketed” to be shared. Things like this get developed and used internally, and then sometimes the company likes to brag about their accomplishments, even if it’s not an externally facing product.
https://blog.google/outreach-initiatives/environment/deepmin...
self driving cars and transformers not good enough for you?
What self driving cars? None of them are up to the task from what I've seen.
Waymo's got an open-to-public service in SF & Phoenix. Testing in LA & Austin now. It works really well, and has been expanding quickly (relatively speaking). https://www.nytimes.com/2024/05/22/travel/self-driving-cars-...
Not when hipsters in my town strand them with a sack of flour.
Someone from The Valley told me a couple (~8) of years ago that he didn't like the way SV startup culture had transformed. He argued that in the past, the culture of startups in the valley focused on innovation and disruption. VCs would fund moonshots and small teams doing crazy new things.
But nowadays, the valley looks more for "scalability" projects, things that will sell to millions of people.
He blamed the cost of living/hiring in the area. He mentioned that with a lower cost of wages and living, a pre-seed startup could go a long way with family money funding the moonshot.
I think it's sad that, as you said, even companies with large chunks of money aren't willing to spend on R&D as much as before. I guess a war is needed for that, unfortunately.
You're saying Google's long work in AI, Quantum Computing, etc... are not trying to be innovative? Their researchers certainly put out a lot of influential papers.
https://research.google/
it was Google's research that led to the current LLM revolution
Ma Bell was regulated and they used the labs partly as a slush fund to smooth out their apparent earnings. When they got a rate increase, the labs would be well funded for a while, so AT&T wouldn't have to show an unseemly spike in profits. When they went for a long while without an increase, the labs would run lean until the next increase came. At least that's what someone told me back then.
I've read more than one history of bell labs and never heard about this dynamic.
This makes me think of Meta's approach in open sourcing a lot of their AI efforts. I can't find the exact snippet from the Zuckerberg interview, but the reasoning was:
If Meta open sources their models/tools and it gains wide adoption, ways will be found to run the models more efficiently or infrastructure/research built on top of Meta's work will ultimately end up saving them a lot of costs in future. Release the model that cost $10bn to make now, and save yourself billions when others build the tooling to run it at 1/10th the cost.
It rings a bit false when juxtaposed with their $40b spend on the Metaverse…where was the savvy leveraging of the open source community then?
Meanwhile Meta’s competitors commoditise and glean profits from actually-SOTA LLM offerings.
In any case, their hypothesis is testable: which open source innovations from Llama1/2 informed Llama3?
The leverage with the Metaverse was eventually users were supposed to create content which in turn makes the product better and brings more users.
I wonder if we'll look back like this at Meta, which is pouring ungodly amounts of money into VR right now.
Probably, given that they've probably done more open research than anyone else (open compute, llama, basically all of the Big Data stuff came from papers by Google and code from Facebook).
They're unlikely to be praised around here, but I think history will potentially be kinder to them.
Tax law would also have to change: https://pro.bloombergtax.com/insights/federal-tax/rd-tax-cre....
Winner winner chicken dinner.
I think, ironically, the "Tax Cuts and Jobs Act of 2017" has gutted tech jobs and R&D in a profound way.
The act actually took us back by many decades in terms of R&D incentives, and devastated US competitiveness vs China by disincentivizing R&D across the board.
Thank you - that's a spot on answer. Bell Labs was the only way Ma Bell could stay relevant given that the rest of the world was making progress on their own.
Perhaps you mean the only way they could maintain their monopoly? Bell Labs was Bell's bargaining chip with the government to allow it to maintain its market dominance. The government allowed, at least for a long time, the monopoly to persist so long as Bell Labs was acting as a public good. As part of that social contract, Bell was not allowed to capitalize on what Bell Labs produced. I expect Bell Labs is so notable because the public at large was able to take their discoveries and turn them into productive ventures, which they did. That's not completely unheard of today – LLM-based businesses being a recent example that stem from Google opening up research to the public – but it is unusual for today's research labs to give away everything. Today, not even the research labs with that explicit intent (OpenAI) are willing to give away everything (or much of anything).
In more ways than one. Sandia National Laboratories (mentioned in the article) is a national research laboratory, and since it was managed by AT&T from its inception in 1948 it was organized like Bell Labs. It's no longer managed by AT&T and its culture today feels a lot less like Bell Labs, but in the early days the employees called it "Bell Labs West." Sandia is the only national lab that has an AT&T origin story.
The evidence pointing toward monopoly is IBM TJ Watson Research and Xerox Parc.
It would require The Phone Company.
https://www.youtube.com/watch?v=CHgUN_95UAw
https://www.youtube.com/watch?v=9jEiTxg21q8