
TypeSpec: A new language for API-centric development

darylteo
14 replies
5d7h

Spec <-- api code is far superior to Spec --> api code, imo.

Feels like it's going backwards - there's really no reason why it has to be a .tsp instead of a .ts with actual API code. It's even using @annotations. In fact, the annotations I see in the screenshot (@route, @query, @path) are practically the same as in NestJS.

I feel we should be focusing on enhancing that paradigm instead. In fact, I already have a working POC of NestJS -> OpenAPI -> client libraries, so I see no place for this. The spec itself is simply a vehicle for code generation, serves little purpose otherwise, and I'd be happy to be rid of it.
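For reference, the code-first equivalent in NestJS is roughly this (a trimmed sketch; the controller and fields are my own example, and the OpenAPI spec is then generated from code like it):

    // Code-first: decorators like @Get/@Param/@Query map almost one-to-one
    // onto TypeSpec's @route/@path/@query, except this file is also the
    // implementation.
    import { Controller, Get, Param, Query } from '@nestjs/common';

    @Controller('widgets')
    export class WidgetsController {
      @Get(':id')
      getWidget(@Param('id') id: string, @Query('expand') expand?: string) {
        // Real logic lives here; the spec is derived, not hand-written.
        return { id, expand: expand ?? 'none' };
      }
    }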

angra_mainyu
5 replies
5d7h

Basically my PoV. The API code itself is the best possible documentation.

Not to mention, how else do you see what complex logic might happen in an endpoint?

It seems TypeSpec deals only with extremely simple CRUD APIs, for which, again, just reading the code would be good enough.

In scenarios where you want to offer the API-consuming team some mock, I'd argue time would be better spent providing them with a json-server implementation (see: https://www.npmjs.com/package/json-server).
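A mock like that takes minutes to stand up, e.g. with json-server's classic v0.x programmatic API (newer majors are CLI-focused, so treat this as a sketch), where db.json holds whatever sample data you want to serve:

    // db.json might contain: { "widgets": [{ "id": 1, "color": "red" }] }
    const jsonServer = require('json-server');

    const server = jsonServer.create();
    server.use(jsonServer.defaults());        // logging, CORS, static files
    server.use(jsonServer.router('db.json')); // REST routes from the file
    server.listen(3000, () => console.log('mock API on :3000'));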

nicce
4 replies
5d6h

If you can define a single spec and autogenerate everything (client/server/OpenAPI et al.), then spec-first is superior.

Take a look at https://smithy.io/2.0/index.html, which, judging by the docs and the awesome list, can already generate much more than TypeSpec.

angra_mainyu
2 replies
5d5h

I'm not sure how I can autogenerate non-trivial logic, which is my point.

How would you handle a typical case where based on request data an endpoint fetches some additional information from various sources and depending on that performs several different actions?

This is the most common scenario I've encountered outside of extremely trivial CRUD endpoints.

EDIT: don't get me wrong, I'm not being purposefully obtuse - I was a big Swagger advocate back in the day; however, over time I came to realize that the effort was much better invested in writing clear API code and possibly a mock impl like json-server.

shepherdjerred
0 replies
5d

Smithy is _very_ similar to Coral (an internal library within Amazon).

Coral is used by every AWS service -- every AWS service defines their APIs using Coral, and if you want to interact with another service you use a generated client.

The client can generally be used as-is, though sometimes you might want some custom client code. In the case of a generated client, you can just grab the definition of the API you want to call, generate the client in the language of your choice, and... that's it!
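Conceptually, using such a generated client looks something like this (the names are invented, since Coral is internal, but Smithy-generated TypeScript clients have a similar command-style shape):

    // Hypothetical generated-client usage; WidgetServiceClient and
    // GetWidgetCommand are made-up names standing in for generated code.
    import { WidgetServiceClient, GetWidgetCommand } from './generated/widget-client';

    const client = new WidgetServiceClient({ endpoint: 'https://widgets.example.internal' });

    // Request and response types were generated from the API definition,
    // so the contract is enforced on the caller's side by the compiler.
    const widget = await client.send(new GetWidgetCommand({ id: 'w-123' }));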

For the server, you still have to implement the logic of how to form the responses, but what the requests/responses look like is enforced by the framework.

Here's an example: https://gist.github.com/shepherdjerred/cb041ccc2b8864276e9b1...

I'm leaving out a _lot_ of details. Coral is incredibly powerful and can do a lot of things for you for free. Smithy is, from what I can see, the same idea.

nicce
0 replies
5d4h

I am still trialing Smithy, but as far as I understand, the code generated by Smithy provides suitable abstractions, and you never modify the generated code yourself.

It leaves the middleware library selection to the user, and with middleware you can do whatever more complex operations you need.

TypeScript server overview: https://smithy.io/2.0/ts-ssdk/introduction.html

This execution flow allows the service developer to choose not only their endpoint, but the programming model of their service. For instance, all of the shim conversion and handler invocation can be refactored into a convenience method, or the service developer could choose to incorporate their favorite open source middleware library, of which the server SDK would simply be one layer. It also allows open-ended request preprocessing and response postprocessing to happen independent of Smithy. For instance, a developer could add support for request or response compression, or a custom authentication and authorization framework could be plugged into the application before the server SDK is invoked, without having to fight against a more heavyweight abstraction.

The Service type itself also seems to make it possible to define quite complex logic: https://smithy.io/2.0/spec/service-types.html
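As far as I can tell, the implementing side ends up looking roughly like this (a sketch based on my reading of the ts-ssdk docs; the './generated/ssdk' module and its exports are illustrative stand-ins for what the generator would emit):

    // Rough shape of a Smithy TypeScript server SDK integration.
    import { getGreetHandler, GreetInput, GreetOutput } from './generated/ssdk';

    // You implement only the operation body; the input/output types come
    // from the Smithy model.
    const greetOperation = async (input: GreetInput): Promise<GreetOutput> => {
      return { message: `Hello, ${input.name}!` };
    };

    // The generated handler owns (de)serialization and validation; wiring it
    // into Express/Lambda/your middleware stack is left to you.
    const handler = getGreetHandler(greetOperation);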

darylteo
0 replies
5d5h

Hmm. So as I understand it, it generates handler definitions which you then implement for type safety, but routing/hooking things up is an implementation detail?

catlifeonmars
3 replies
5d7h

What if you need to generate two server implementations (possibly in different languages) that adhere to the same specification?

You don’t always go server -> spec -> client.

darylteo
2 replies
5d7h

That seems like a rare use case. Surely we optimise for the general one - you start working on a new server-powered platform, write it once in your stack of choice, and release libraries for clients (mobile apps, integrations, web apps).

Even so, an e2e test suite would surely provide far more utility than a spec that simply stubs out endpoints with no functionality.

vundercind
0 replies
5d6h

It is not at all uncommon to want a spec for an API that's out of your control. Or to want to define shared data structures at the transport layer which may have "owners" in heterogeneous languages. Spec languages are (well, can be—so very many are terrible) very nice in those cases, which are not at all rare. They may well be more common than the "we're writing a new single-language monolith that is free to control the shape of the data it slings" use case.

BillyTheKing
0 replies
5d6h

For internally consumed APIs I somewhat agree, but if you wanna expose APIs to external developers, it usually does pay off to spend a bit of time on the API design itself: what APIs would you expect to see if you were the user of your own service? This is where tools like this really help, imo - you can even preview your API in some OpenAPI doc tool like ReadMe or similar, in which case it's like previewing your product before releasing it.

ivan_gammel
2 replies
5d7h

Spec-first approach works better when client and server teams work on their parts simultaneously and need some contract before the implementation starts.

darylteo
1 replies
5d7h

I can see the appeal there, but I can only imagine its utility diminishes quickly over time as the product evolves, and it probably doesn't survive past implementation kickoff. The last thing developers want to do is update a spec after having to update multiple tests and the server code.

If it does become a long-lived artifact, CI/CD must also be a nightmare: you have to figure out which commit matches which version of the specification, since the spec is now a distinct artifact from its documented target, and similarly which version of the client. A literal "3 body problem".

On the other hand, if you already have a project template (granted, you do need to fight through all of the build-time configuration required to get a TypeScript project up and running), you could probably achieve the same by simply stubbing the API endpoints in code to generate the spec.

If there is an advantage to a spec-first model, it's that any change to the API becomes a conscious, highly visible change. I've also encountered situations where an innocuous refactor (changing a class name or method name) broke previous builds. But one could potentially integrate a gate into CI/CD by diffing the outputs of generated specs.
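For instance, something along these lines (the npm script and file names are placeholders):

    // Sketch of that CI gate: regenerate the spec from the code-first app
    // and fail the build on drift. Script and file names are placeholders.
    import { readFileSync } from 'node:fs';
    import { execSync } from 'node:child_process';
    import assert from 'node:assert';

    execSync('npm run generate:openapi -- --out openapi.generated.json');

    const committed = JSON.parse(readFileSync('openapi.json', 'utf8'));
    const generated = JSON.parse(readFileSync('openapi.generated.json', 'utf8'));

    // Any difference means the API changed without a conscious spec update.
    assert.deepStrictEqual(generated, committed, 'API spec drift detected');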

Much of my opinion on this subject is based on my own experience using Postman as a pre-implementation spec, but conceptually I see the same problems arising from any spec-first approach.

ivan_gammel
0 replies
5d6h

You describe problems that were solved a long time ago, and not even in IT. The spec-first approach works fine in the long run; it just requires a bit more process maturity than you can find in a typical startup.

For example, the problem of matching commits with specs doesn't even exist in environments without continuous deployment (which is rarely a real necessity and often even undesirable). You just tag your releases in VCS (easily automated) and track their scope in documentation (the job of responsible product and engineering managers who know what goes live and when).

DanielHB
0 replies
5d5h

The whole point of a DSL for APIs is that it can interop with different languages, frameworks and toolchains. Sure, if all your services are NestJS, then you don't need this.

verdverm
9 replies
5d15h

Bespoke languages are a hard sell and incur significant extra effort for the team building them.

1. People don't want to learn bespoke languages.

2. You have to build all the ecosystem tools for a language (compiler, docs, language-server, IDE integrations, dependency management)

Similar endeavors are WaspLang and DarkLang, which I have yet to see in the wild or (meaningfully) on HN. Better to use an existing language and focus on the value add.

I personally built something with similar value add (source of truth -> all the things). I've been through some of these pain points myself.

https://github.com/hofstadter-io/hof

The idea is CUE + text/template = <all the things> (not limited to APIs)

BillyTheKing
3 replies
5d14h

It's this or writing OpenAPI YAML... Even for people who know YAML, picking this up to define and write basic OpenAPI definitions is much simpler than writing an OpenAPI doc by hand, which is really painful.

pattycakes
2 replies
5d12h

FastAPI can generate the OpenAPI YAML

BillyTheKing
1 replies
5d11h

not everyone writes Python, and not everyone starts code-first

verdverm
0 replies
5d2h

There is a framework in every language I've worked with that will generate the OpenAPI schema for you. That you have to sketch the API in a language is not necessarily "code-first"; it's just a different language than YAML/JSON (Go, Python, JS) - you write the same types, without the implementation.

cjonas
2 replies
5d15h

New languages are at an even bigger disadvantage now with the rise of generative AI programming. A fancy new framework that is objectively better may actually be less productive because AIs haven't been trained on it.

verdverm
1 replies
5d14h

There are some AI companies focusing on chatbots for developer projects that do better than using a general purpose LLM.

I think this will become more common and not really a barrier

cjonas
0 replies
2d4h

All the game-changing AI programming I've seen is coming from LLMs... Regardless, there will always be a lack of training context for newer languages and frameworks.

lifty
1 replies
5d7h

Thanks for hof! Thinking of using it for a project. In what state is the project? I noticed that commits have slowed down lately, and I was wondering whether you consider it stable at the moment and usable as-is.

verdverm
0 replies
5d2h

You're welcome

I've been working on some AI-related stuff lately, which is part of the reason for the slowdown. And I am actually using hof myself for real work.

Code gen is pretty stable. I've been meaning to fix the TUI keybindings on Mac before releasing the current beta. I was also hoping the evaluator improvements would land upstream, but that hasn't happened yet...

I'll take a stab at releasing a new version this weekend, per your inspiration

dexwiz
8 replies
5d15h

So this is coming from Microsoft. I assume it's going to be their answer to GraphQL. If the project is dogfooded internally, the tools may actually be half decent, compared to whatever an open source consortium cobbles together. Not sold yet, but it might gain more traction.

bterlson
7 replies
5d14h

(I work on the team)

I wouldn't say that TypeSpec is like GraphQL, so it would be hard for TypeSpec to become that on its own. GraphQL has a lot of opinions that are required in order to build a concrete application (protocols, error handling, query semantics, etc.), whereas TypeSpec at its core is just an unopinionated DSL for describing APIs. Bindings for particular protocols are added via libraries, and a GraphQL library is something we have long considered.

So in short, if Microsoft invented a set of opinions that solved similar scenarios to GraphQL, it might use TypeSpec as the API description language in that context, but it wouldn't be fair to equate those opinions with TypeSpec itself.

la64710
2 replies
5d8h

It's probably more like Smithy?

catlifeonmars
1 replies
5d7h

That's what I thought when I saw this: Smithy without the need to bring Gradle into your project.

sa-code
1 replies
5d13h

Would you say it's an alternative to OpenAPI?

bterlson
0 replies
5d13h

I think it can be, but it can also be used with OpenAPI to great effect. We're not trying to replace OpenAPI; OpenAPI is great in many ways and is useful for many people. In general we believe strongly in being interoperable with the existing API description ecosystem.

nfw2
1 replies
5d14h

The GraphQL spec does include a DSL to describe the API though, so this is similar to that specific piece. The DSL powers a lot of what people like about GraphQL, like auto-generating a client SDK with type safety. This library does seem to cover a subset of the GraphQL benefits that aren't baked into REST by default.

bterlson
0 replies
5d14h

Yup, similar to that specific piece, and I definitely agree that GraphQL's DSL shows how much the DX of the description language itself matters, and how codegen is a productivity multiplier. I think gRPC also demonstrates this. You can think of TypeSpec as an attempt to bring these benefits to any protocol, schema vocabulary, or programming language.

Splizard
8 replies
5d15h

Why create a new language, rather than use an established programming language like Go where you can actually write an implementation too?

skybrian
6 replies
5d15h

Schemas that support multiple languages are useful when you actually use more than one language. This is more common in large organizations and between organizations. But it might also happen if you have code on multiple platforms, for example for mobile apps.

bterlson
5 replies
5d14h

Moreover, compiling an IDL to N languages is substantially easier than compiling implementation code across N languages, especially when generating idiomatic code is a requirement. A language purpose-built for this task is going to produce better results while having substantially lower complexity.

(My $0.02 as someone who works on TypeSpec)

jimbobimbo
2 replies
5d13h

Sorry, could you elaborate? If I'm creating an API using, say, ASP.NET Core or Go, I can generate an OpenAPI spec from the actual implementation. How does this "IDL" fit into the workflow? Is it another output in addition to the OpenAPI spec?

bterlson
0 replies
5d13h

TypeSpec is designed primarily as an API-first tool as opposed to being an output. In the context of ASP.NET and HTTP/REST APIs, our goal is that you can write your spec and generate much of the service implementation and clients. From this same source of truth you could also emit API definition formats like OpenAPI, schemas like JSON Schema, and other things besides.

BillyTheKing
0 replies
5d13h

In my (limited) experience so far with TypeSpec - it really shines in an API first approach, so you define your API before you implement it, but not so much the other way around.

seanmcdirmid
1 replies
5d13h

I would have expected a bit more than type specifications, maybe some behavior specifications also? Something like Daan’s type states. But I get why we are still splitting hairs over data types.

waern
0 replies
5d2h

I've been working on an API spec language where state changes can be modeled with linear logic: https://apilog.net/docs/why/ It doesn't have "schemas" yet though, which may seem odd given they are a crucial part of this type of language. :-) But that is because I am experimenting with different designs on that front.

neonsunset
0 replies
5d12h

Because Go is a bad language, and imposing the requirement to use it on the teams is absurd.

usrusr
7 replies
5d10h

Feels like cheating; next to the YAML of OpenAPI, anything will look good. And that's all while I still consider OpenAPI one of the best things that have happened.

But I've also been kind of holding my breath for TypeScript making its breakthrough as a schema language - more specifically its surprisingly big non-imperative, non-functional subset, obviously. And at first glance this seems to be exactly that: "what if we removed JavaScript from TypeScript so that only typing for JSON remains, and added some endpoint metadata to make it a viable successor to OpenAPI and WSDL?"
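Purely as a thought experiment, that subset might look something like this (not any actual tool's input format, just the shape of the idea):

    // Types-only TypeScript as a schema language: data shapes plus a little
    // endpoint metadata, no imperative code. Illustrative, not a real format.
    interface Widget {
      id: string;
      weight: number;
      color: 'red' | 'blue';
    }

    // Endpoint description as a plain type instead of decorators.
    interface GetWidget {
      method: 'GET';
      path: '/widgets/:id';
      response: Widget;
    }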

vbezhenar
4 replies
5d10h

The TypeScript type system is very advanced. It won't be possible to generate corresponding bindings for all popular languages while keeping them idiomatic. I'd prefer an API language to be very simple and straightforward.

madeofpalk
1 replies
5d7h

OpenAPI's 'type system' is surprisingly advanced also, supporting explicitly discriminated unions and other things like that, which don't map well onto all other languages.
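For example, a oneOf with a discriminator maps cleanly onto a TypeScript union (the pet shapes below are the usual illustrative example), but plenty of target languages have no direct equivalent:

    // How an OpenAPI oneOf-with-discriminator lands in TypeScript.
    interface Cat { petType: 'cat'; meows: boolean }
    interface Dog { petType: 'dog'; barks: boolean }
    type Pet = Cat | Dog;

    function describe(pet: Pet): string {
      // Narrowing on the discriminator field, exactly as the spec intends.
      switch (pet.petType) {
        case 'cat': return pet.meows ? 'noisy cat' : 'quiet cat';
        case 'dog': return pet.barks ? 'noisy dog' : 'quiet dog';
      }
    }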

brabel
0 replies
5d6h

Yep and that comes from JSON Schema: https://json-schema.org/

I believe recent versions of OpenAPI are "compatible" with JSON Schema (at least they "wanted to be" last I checked as I was implementing some schema converters).

Even TypeScript is not enough to represent all of JSON Schema! But it gets close (perhaps if you remove validation rules and stuff like that it's a full match).

But even something like Java can represent most of it pretty well, especially since sealed interfaces were added. I know because I've done it :).

usrusr
0 replies
5d7h

Many things that can't be expressed in a given type system can still be expressed quite nicely in code generated for the domain data. You might see some name elements getting forcibly moved from the types universe into accessor method names. This is state representation (hopefully!), not the CORBA era's pipe dream of magically remoting arbitrary objects across space and languages.

If for some reason your problem does involve tapping the depths of TypeScript's type expressivity (the elaborate rule systems expressed in MapLibre style JSON come to mind?), you'd better have the closest approximation you can get on the other end of the line.

jayd16
0 replies
5d3h

So what happens if a server in another language uses that feature? You don't want to be able to represent that?

zaphodias
0 replies
4d11h

it doesn't look worse than OpenAPI's YAML to me

it's fairly concise, the method and the request/response types are well separated and readable

the only thing I could argue with is mixing validation and type defs, as it looks like one of those things that quickly evolve over time, where you end up duplicating both in the schema and the business logic

DanielHB
7 replies
5d8h

Would be nice if you could just import those TypeSpec files in TypeScript (and other languages?) and get automatic TypeScript types from them.

Codegen is annoying and error prone.

AntonCTO
5 replies
5d8h

+1. It even looks very similar to TypeScript. Why not use TypeScript as a description of APIs in the first place? Get TypeScript types and even generate OpenAPI schema on the fly to serve it at `/openapi`?

BillyTheKing
2 replies
5d6h

Yes, the whole world runs on NestJS.

DanielHB
0 replies
3d9h

The solutions you are talking about are called "code-first"; TypeSpec is a "schema-first" solution. Both approaches have their pros and cons:

Code-first:

- No extra steps between code and running application

- No mismatch between schema and code

- Requires tooling for every language used in your stack. This tooling is usually more complex than schema-first codegen, but it is almost always built into, and core to, the backend framework you are using, so it tends to be better supported.

- Requires teams to know each service's backend language to propose changes

- Harder to build a coherent central API documentation if you have multiple services. Requires complex tooling for merging the different schemas from the different services

Schema first:

- Schema first means contract-first design; it scales better to products with multiple separate teams using multiple languages. It is easier to learn the schema DSL than to poke around a backend language unknown to the developer.

- Any change in the contract requires changes in two places (schema and code)

- API consumers can easily suggest changes to the schema that are implemented by the relevant team

- Usually requires some codegen step for each backend and client language (with the usual codegen problems). Runtime-only schema validation can be done, but it is usually a bad idea to rely only on it.

- Easier mocking. Clients can start implementing before the backend is ready, and tests can be written against mocks.

There are definitely more pros and cons that I am missing, but it is a tradeoff. If you have multiple backend services and need to support multiple backend languages, I would definitely go for schema-first.

You can work with OpenAPI in a schema-first way, but as many people have pointed out over the years, OpenAPI YAML files are, to be polite, not very human-friendly. TypeSpec seems to be a more sensible way to work with HTTP APIs in a schema-first way while keeping interoperability with existing OpenAPI tooling, at the cost of an extra codegen step (TypeSpec .tsp -> OpenAPI .yaml).

DanielHB
0 replies
5d5h

TypeScript is too powerful; there are a lot of TypeScript constructs that can't be represented in OpenAPI specs, or that would generate massively complex OpenAPI specs and bring your tooling's performance to a crawl.

A subset of TypeScript could work, but I imagine it would be fairly confusing to support some features here and other features there.

I think they are going for a lowest-common-denominator approach and will eventually add other targets besides OpenAPI.

kristiandupont
0 replies
5d8h

I generally prefer code generation. I don't see why it is "error prone" any more than anything else is. In fact, it means that errors aren't concealed in two levels of abstraction, which can make them much harder to debug. Also, with generated code, you can add a build step to work around shortcomings or bugs in the generator, whereas otherwise you have no choice but to live with them.

The "annoying" part I do get -- it's obviously nicer to have instant feedback than being forced to rebuild things -- so for things where you iterate a lot, that does weigh in the other direction.

martinn
6 replies
5d11h

Could someone clarify what the use case of a tool like this is, please?

Is this something that helps if you, say, are building a new API and will need to create both the server implementation as well as the client implementations in multiple languages? And so, it can automatically do all of that for you based on an API spec? Or is it something different.

Funnily enough, I developed a Python library recently that allows you to build API clients in a way that very closely resembles the TypeSpec example. But I'm pretty sure they are very different things.

wg0
5 replies
5d11h

If you are in a situation where you have a backend, want to expose an API, and will eventually want a client, you need a formal spec as the starting point, from which both the server and the clients are generated.

At the moment, OpenAPI with YAML is the only way to go, but you can't easily split the spec into separate files the way you would split any program into packages, modules and what not.

There are third-party tools [0] which are archived, and the libraries they depend upon are up for adoption.

In that space, you can either use something like the CUE language [1] or something like TypeSpec, which is purpose-built for this. So yes, this seems like a great tool, although I have not tried it myself yet.

[0]. https://github.com/APIDevTools/swagger-cli

[1]. https://cuelang.org/

EDIT: formatting

monkfish328
1 replies
5d10h

So just to clarify, how would one go about auto-generating a stub handler for these route definitions?

martinn
1 replies
5d10h

Thanks, that makes sense for that use case.

My question is probably a more general one around the use case for writing an API spec (in a format like OpenAPI or TypeSpec) and then writing the server implementation to match it, as opposed to creating the API spec automatically from the server implementation (and being able to easily refresh it).

I understand that writing the spec and then the server implementation seems to have some benefits. I'm curious to hear about the common use cases for it, as in my mind I could quickly stub a server implementation (and automatically generate a spec) rather than create the spec by hand and then write the server implementation again. But I'm sure there are some things I'm missing.

wg0
0 replies
5d9h

No, you don't have to write the server stub yourself. You should generate it.

See my comment below to another question [0]

That's the upside: instead of generating specs from method comments, you actually generate the methods from a formally verified/linted and carefully crafted spec.

[0]. https://news.ycombinator.com/item?id=40208847

favflam
6 replies
5d15h

What is wrong with protobufs and grpc?

_ZeD_
3 replies
5d14h

Both of them. Both of them are wrong.

vundercind
2 replies
5d6h

When one starts gluing together a lot of data pipelines full of JSON trash and from all kinds of systems with incompatible data types (whether or not “lol what’s an integer?” JSON is involved), one quickly comes to appreciate why things like Protobuf exist and look the way they do.
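The integer problem in concrete form, for anyone who hasn't been bitten yet:

    // JSON has only "number", and JavaScript parses it as a float64, so
    // large int64 IDs silently lose precision in transit.
    const id = JSON.parse('9007199254740993'); // beyond Number.MAX_SAFE_INTEGER
    console.log(id); // 9007199254740992 -- off by one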

ramesh31
1 replies
5d2h

When one starts gluing together a lot of data pipelines full of JSON trash and from all kinds of systems with incompatible data types (whether or not “lol what’s an integer?” JSON is involved), one quickly comes to appreciate why things like Protobuf exist and look the way they do.

And then one proceeds to spend days trying to mash that JSON mess into a protobuf and debugging segfaults, rather than just getting the job done with HTTP.

vundercind
0 replies
5d

At least it’s confined to the edges.

frankrobert
0 replies
5d6h

The protobuf v3 spec doesn't support required fields anymore and also mandates field ordering. In my opinion, both of these are deal breakers for generating types and typed clients for your FE environment (or any consuming application). Your entire schema would be Partial<T> on every field. It defeats the purpose of type safety, to me.
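Concretely, the generated types degrade to something like this (an illustrative shape, not the output of any particular generator):

    // With proto3's everything-is-optional semantics, generated TypeScript
    // types end up as the moral equivalent of Partial<T>:
    interface User {
      id?: string;    // "always present" is no longer expressible
      email?: string;
    }

    // ...so every consumer null-checks fields the server always sends.
    function displayName(u: User): string {
      return u.email ?? '(missing email?)';
    }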

62951413
0 replies
4d23h

CORBA, Avro RPC, Thrift RPC, gRPC, now this. In this industry, each generation wants to re-invent IDL every decade. But anything is better than JSON over HTTP, so why not this.

pjmlp
5 replies
5d9h

At Microsoft, we believe in the value of using our own products, a practice often referred to as "dogfooding".

One would think so; unfortunately, that is not what it looks like in the zoo of native Windows desktop development, or in Xamarin/MAUI adoption in their own mobile apps.

Savageman
2 replies
5d9h

It must be a new policy, or maybe only for this product (which makes good marketing).

I've used Microsoft Graph to manage emails, and I'd be very surprised if they use it for Outlook...

gpderetta
0 replies
5d4h

Dogfooding has been a thing at MS probably since its inception. Of course, that doesn't mean it is always practiced fully.

madeofpalk
0 replies
5d7h

Windows is dogfooding _all_ the native app development options simultaneously.

jayd16
0 replies
5d3h

Dogfooding doesn't mean write everything from scratch as soon as something new comes up.

vbezhenar
4 replies
5d9h

I failed to find an answer to the main question: what output languages are supported? Is the only way to emit OpenAPI and then use one of their terrible generators?

JoyrexJ9
3 replies
5d8h

You can create your own emitters; there's info in the docs on how to do so. My team built a custom TypeSpec emitter to output an SDK and a set of libraries.

bterlson
0 replies
5d1h

There are a few emitters in our standard library - OpenAPI 3.0, JSON Schema 2020-12, and Protobuf. REST client and service emitters for a few languages are coming online now and should be ready in the next couple months.

marioguerra
0 replies
4d4h

I'm the TypeSpec PM at Microsoft and I'd like to learn more about your use case and experience building a custom emitter. Are you willing to chat about it? If so, what's the best way for me to reach you?

vbezhenar
1 replies
5d9h

WSDL's issue was that it was designed by a committee of several huge companies, so it was inconsistent and bloated. The same could be said about many XML-related standards.

Nowadays big companies rarely work together and prefer to throw their own solutions at the market, hoping to capture it. That results in higher-quality approaches, because each is developed by a single team and focused on a single goal, rather than trying to please 10 vendors with their own agendas.

tannhaeuser
0 replies
5d8h

SOAP was designed as an object-access protocol and released as XML-RPC in June 1998 as part of Frontier 5.1 by Dave Winer, Don Box, Bob Atkinson, and Mohsen Al-Ghosein for Microsoft, where Atkinson and Al-Ghosein were working. The specification was not made available until it was submitted to IETF 13 September 1999. [1]

WSDL 1.0's list of editors reads [2]:

Erik Christensen, Microsoft; Francisco Curbera, IBM; Greg Meredith, Microsoft; Sanjiva Weerawarana, IBM

IOW, TypeSpec is by the same company as SOAP and WSDL.

Nowadays big companies rarely work together [...] That results in higher-quality approaches

[Citation needed]

[1]: <https://en.wikipedia.org/wiki/SOAP>

[2]: <http://xml.coverpages.org/wsdl20000929.html>

paulddraper
0 replies
5d12h

You may already know this but:

1. A more exact analogy would be WSDL+SOAP.

2. WSDL and SOAP are defined in XML, and SOAP describes XML.

3. The popularity of these technologies followed the popularity (both rise and decline) of XML generally.

4. TypeSpec describes JSON and protobuf, and will likely also lose popularity if those formats do.

dexwiz
0 replies
5d15h

I still use WSDLs, or rather the platform I work on does. Maybe not popular for new tech, but they are still alive. Hate me, but I'd rather have generated XML than generated YAML.

ActionHank
1 replies
5d5h

Yeah, but this one is better because new.

I might be wrong, but I suspect that the crazy hype-driven development has started to move on from frontend to backend.

blowski
0 replies
5d5h

The backend has definitely suffered from "crazy hype-driven development" for the last 30 years. Perl, Python, Ruby, Java, PHP, Scala, Clojure, Go, Rust have all had their brief moment as the silver bullet. Not to mention ops tooling - Vagrant, Docker, Kubernetes, Puppet, Chef, Ansible.

I wasn't alive in the 1970s, but I'm guessing that those who were would say it was just as faddish then as well.

vaylian
0 replies
5d3h

Not quite. pkl is a language mostly designed for parsing and serializing data. TypeSpec is a language designed to describe APIs and the structure of the data those APIs take. You can actually combine the two technologies as follows:

1. Read a .pkl file from disk and generate (for example) a Person struct with a first name, a last name and an age value.

2. Let's say that according to some TypeSpec, the HTTP endpoint /greet accepts a POST request with a JSON payload containing a first and a last name. You convert your Person struct into a JSON literal (and drop the age field in the process) and send it to the HTTP endpoint

3. You should receive a greeting from the HTTP endpoint as a response. The TypeSpec may define a greeting as a structure that contains the fields "message" and "language".

4. You can then use pkl to write that greeting structure to disk.

Sidenote: pkl also supports methods [1], but in almost all use cases you only use pkl to define which fields your data has. TypeSpec cares most about the methods/endpoints. Of course, you still need to define the shape of the arguments to those methods, and that is where you have a bit of overlap between the two technologies. I can imagine generating pkl definitions from TypeSpec definitions.

[1] https://pkl-lang.org/main/current/language-reference/index.h...
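Step 2 of that flow, sketched in TypeScript (the /greet endpoint, the field names, and the Greeting shape are the hypothetical ones from this example, not a real API):

    interface Person { first: string; last: string; age: number }
    interface Greeting { message: string; language: string }

    async function greet(p: Person): Promise<Greeting> {
      const { age, ...payload } = p; // drop the age field, per the spec
      const res = await fetch('https://example.invalid/greet', {
        method: 'POST',
        headers: { 'content-type': 'application/json' },
        body: JSON.stringify(payload),
      });
      return (await res.json()) as Greeting;
    }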

dbrower
2 replies
5d14h

Will it translate to YAML for toolchains that want that?

I'd be delighted to have a high-level IDL that gave the same sort of thing that CORBA IDL gave us 25 years ago -- schema and stub generation for multiple languages.

mnahkies
0 replies
5d12h

I added support for TypeSpec as an input specification to my OpenAPI 3-based code generator a couple of days ago.

They provide an API to convert to openapi documents so it was pretty painless (https://github.com/mnahkies/openapi-code-generator/pull/158)

My focus is on doing both client SDK and server stub generation, though only TypeScript so far - I will hopefully add other languages eventually, once I'm satisfied with the completeness of the TypeScript templates.

bterlson
0 replies
5d14h

The OpenAPI and JSON Schema emitters can produce YAML.

kkukshtel
1 replies
5d3h

I wish any of TypeSpec, CUE, Pkl, Dhall, etc. would just implement their core functionality in C with an ABI for other language bindings. Needing to adopt whatever dependency they built on, as part of adopting the language itself, is a big ask. I want to try out your config language; I don't want/need all of Node to make that happen.

verdverm
0 replies
5d2h

CUE is building out language bindings for various languages right now.

Since CUE is written in Go, you can build a .so that is then used like the C library you describe, if I understand you correctly.

joelwilsson
1 replies
5d12h

Looks like a competitor/alternative to Smithy, https://smithy.io/2.0/index.html. Since at least one person from the TypeSpec team is here, do you have any thoughts on how they compare?

jen20
0 replies
5d4h

This was my thought too - since Smithy is already out there and used in a similar domain, it would be useful to have a comparison. "Doesn't have Kotlin and Gradle all over the show" seems like a significant advantage in favour of TypeSpec.

xiaolin
0 replies
5d9h

How about ConnectRPC + grpc-gateway?

stapert
0 replies
5d11h

Looks interesting, but what would be the advantage of this over just writing an OpenAPI specification? It's more concise, but currently it requires you to extend your toolchain to go from TypeSpec to OpenAPI to generated code.

Any plans to add code generation to this project?

schnable
0 replies
5d7h

Does TypeSpec work well with asynchronous and event-driven APIs?

mbrock
0 replies
5d3h

the number of times this document says "not just ... but ..." and "revolutionize" is so obviously GPT-4

leetrout
0 replies
5d16h

I love the idea of tools like this. I have looked at Buf and Fern and am curious to try this as well.

hoppersftw
0 replies
5d1h

lol, about 5 years too late. I've been developing REST APIs with Ballerina for some time now, and there are a multitude of quality-of-life improvements for me. Look at this https://ballerina.io/learn/write-a-restful-api-with-ballerin...

I guess it's NIH for Microsoft. Oh well, at least it's TypeScript.

duped
0 replies
5d2h

How does this compare to MIDL?

domoritz
0 replies
5d6h

This looks interesting, but I already have TypeScript types for my APIs, so I developed https://github.com/vega/ts-json-schema-generator which lets me generate JSON Schema from the sources directly. Yes, it has some oddities because the two languages have slightly different feature sets, but it's been working well for us for a few years. If I didn't have TypeScript, or had a smaller API surface where I'd be okay with typing things again, I would look at TypeSpec, though. It definitely beats writing JSON Schema by hand.
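For reference, programmatic usage is about this simple (going by its README; the source path and type name here are placeholders):

    import { createGenerator } from 'ts-json-schema-generator';

    const config = {
      path: 'src/types.ts', // where the TypeScript types live
      type: 'Person',       // which exported type to convert
    };

    // Walks the TypeScript types and emits a JSON Schema for the given type.
    const schema = createGenerator(config).createSchema(config.type);
    console.log(JSON.stringify(schema, null, 2));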

danhudlow
0 replies
4d17h

The toy example with an API definition that includes zero semantic documentation doesn’t give me a lot of confidence that TypeSpec helps author API definitions that are actually good. It’s easy to create a concise language if all you want to generate is boilerplate.

aleksiy123
0 replies
5d14h

Has validations - that's awesome.

I have a project in mind and was looking for something like this. Closest I found was CueLang.

Now just need to find the time...

BillyTheKing
0 replies
5d14h

I've been using it for my latest API - I was looking for a tool that allowed me to describe APIs similarly to GraphQL and in a design-first sorta way. All these OpenAPI editors just felt crazy clunky and made data relationships within the API non-obvious. TypeSpec is a great tool that really helped me out here - it was exactly what I was looking for!