
A Critique of the Cap’n Proto Schema Language (2019)

Recently on the Cap’n Proto mailing list, someone asked for
feedback on an initial design for an Elm implementation. I chimed in
with some advice and some lessons learned from writing the Haskell
implementation and working with the Go implementation. One thing I
mentioned was this:

I suggest optimizing for “well-written” schema, and making peace
with the fact that certain constructs may generate unpleasant APIs.

David Renshaw asked for details on what bits were difficult to map. I
talked a bit about difficulties with nested namespaces, and mentioned
that I had a longer critique in mind. This blog post is that critique.

It’s mostly a critique of the schema language itself – not the wire
format, not the RPC protocol. There are things worth saying about those,
but to keep things at least somewhat focused, I’m just going to stick
to the schema language proper.

Even so, this is one of my longer posts; get comfortable.

Some of the critiques below have some motivating design principles,
which I’ll call out up front:

  • Design for the call site. When you’re looking at a definition, you
    have the benefit of having the documentation right there. When you’re
    looking at a use of something that has been defined
    elsewhere, if
    the name, order of arguments, etc. doesn’t make it clear what’s going
    on, you have to go look something up to understand it. Also, you only
    have one place where a thing is defined, but many where it is used —
    so it’s more important to optimize for the latter.

    For a schema language like Cap’n Proto, the schema itself is by
    definition not the call site — so we should be suspicious of any
    features that make the schema language more ergonomic at the expense
    of code using the schema.

  • Make things easy on the code generator. Some of my suggestions
    might lead a reader to say “but you can just not use those features.”
    The trouble with that is that a code generator plugin still has to
    worry about them. Having to deal with the general case can make it
    harder to generate nice APIs for “good” schema. When writing the Haskell
    implementation, I spent more time than I care to think about figuring
    out how to handle what I consider to be misfeatures.

    Finding a good mapping from one set into another is easiest if the
    input is much smaller than the output. The more restrictions you can
    make to the schema language the easier it gets to map those to a
    programming language. Thus, unnecessary features are especially
    harmful in this context.

  • Little things add up. While a lot of the things in this post may seem
    nitpicky, the cumulative effect they have is actually a big deal.

Nested Namespaces

In the Cap’n Proto schema language, each struct or interface definition
introduces its own namespace. So you can define:

struct Foo {
    struct Bar {
        # ...
    }
}

struct Baz {
    struct Bar {
        # ...
    }
}

Which in C++ will generate the types Foo, Foo::Bar, Baz, and
Baz::Bar. Rust and OCaml both do something similar.

In languages without any kind of intra-module namespaces, code
generators usually come up with some kind of naming convention as a
substitute. Both the C and Go implementations use underscores to
separate name space segments, e.g. Foo_Bar. The Haskell implementation
uses the single quote, like Foo'Bar.
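
The convention is mechanical enough to sketch in a couple of lines of Go; flatten below is a hypothetical helper, not actual code from any backend:

```go
package main

import (
	"fmt"
	"strings"
)

// flatten joins nested namespace segments with underscores, in the
// style of the Go backend (a sketch, not the real generator code).
func flatten(path []string) string {
	return strings.Join(path, "_")
}

func main() {
	// The nesting behind the generated identifier from web-session.capnp.
	fmt.Println(flatten([]string{
		"WebSession", "WebSocketStream", "sendBytes", "Results", "Future",
	}))
}
```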

The length of identifiers like this gets out of hand quickly. Here’s a
particularly nasty example, from the output of the Go code generator on
Sandstorm’s web-session.capnp:

websession.WebSession_WebSocketStream_sendBytes_Results_Future

An alternative idea is to generate multiple modules, but this can introduce
cyclic dependencies, and the implementation is much more complex at best.
In some cases it is actually impossible.

Furthermore, as Kenton pointed out, having things properly namespaced in
the target language doesn’t actually make this much nicer. He provided
this example line of code from Cloudflare Workers:

case PipelineDef::Stage::Worker::Global::Value::JSON:

Nested namespaces violate the “optimize for the call site” principle;
they are sometimes convenient at the definition site, but lead to APIs
that are annoying to work with in actual code. I generally recommend just
not using nested namespaces in schema code, and consider them to be a
misfeature.

Relatedly, there are a few constructs which the schema author doesn’t have
to give a name to, but which (for most backends) generate named entities:

  • Anonymous unions
  • Groups
  • Method parameter & result lists

All of these things feel quite natural to work with at the schema level,
but then the code generator is left having to come up with a name for
you, and normally it doesn’t have a heck of a lot to work with.

Anonymous Unions

Cap’n Proto includes support for tagged unions/variants. However, unlike
the variants in ML-family languages (e.g. Haskell, OCaml, Elm), Cap’n
Proto unions are not first class types. Instead, they are fields of
structs, which can also be “anonymous.”

To someone coming from an ML-like language, the fact that Cap’n Proto’s
unions aren’t first class types is a bit odd. The official docs make a
good justification for this, and I think it is a priori a reasonable
design. Nonetheless, the fact that the feature maps poorly to existing
languages is a real downside.

I have a suggestion on how to make this better, which I think avoids
losing the good points of the current design. Split the use of unions
into two cases:

  1. Structs which are entirely one big union.
  2. Structs which contain fields outside the union.

The first case is very common, and would be worth making more ergonomic.
I would suggest that rather than writing:

struct Foo {
    union {
        # ...
    }
}

You should just be able to write:

union Foo {
    # ...
}

In the second case I would disallow anonymous unions – if there are
other fields, you have to give the union a name.

You could still add common fields to a top-level union; just change it
to a struct, move all of the existing variants into a (named) union
field, and add your fields per usual.
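
As a sketch of that refactoring (Circle and Square are hypothetical types, and the first form uses the proposed top-level union syntax, which does not exist today; the second form uses Cap’n Proto’s existing named-union syntax):

```capnp
# Proposed (hypothetical) syntax: a type that is one big union.
union Shape {
    circle @0 :Circle;
    square @1 :Square;
}

# The same type after adding a common field: turn it back into a
# struct, with the variants moved into a named union field.
struct Shape {
    area @0 :Float64;
    kind :union {
        circle @1 :Circle;
        square @2 :Square;
    }
}
```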

In languages where it is most natural, this would allow unions to cleanly
map to variants, and structs to records. The code generator would also
not have to invent field names for anonymous unions, and you don’t lose
any of the advantages of the current approach.

The Haskell implementation currently looks for structs which are one big
anonymous union so that it can drop the outer wrapper. By itself this
isn’t too complicated, but it was yet another thing, and they add up.


Groups

As the official documentation describes, groups are primarily useful in
conjunction with unions. They allow more than one field as part of the
same union variant, and they can be a little more self-documenting,
since the members of that variant can be given names.

But stand-alone groups are far less compelling, and present challenges
for code generators. They’re another entity that (usually) needs to have
a name generated for it, and this is again going to end up being longer
than ideal.

I think coupling groups to unions would make this better. There was
actually a surprising amount of complexity in avoiding name collisions
between the full generality of groups and anonymous unions.

Parameters And Results

The huge Go identifier mentioned earlier comes from this (comparatively
reasonable) schema definition:

interface WebSession @0xa50711a14d35a8ce extends(Grain.UiSession) {
    # ...
    interface WebSocketStream {
        sendBytes @0 (message :Data);
        # ...
    }
}

At the definition site, we just name the method and its arguments (and
results, if any). But at the protocol level, arguments and results are
their own structs, and the schema compiler generates a struct type for
each parameter list, and each result list. The name
websession.WebSession_WebSocketStream_sendBytes_Results_Future refers
to something that doesn’t even carry any data – sendBytes has no
return values.

It’s possible to instead define the parameter & result types explicitly:

interface WebSocketStream {
    sendBytes @0 SendBytesReq -> SendBytesResp;
}

struct SendBytesReq {
    message @0 :Data;
}

struct SendBytesResp {}

It’s tempting to argue that in this case, you don’t gain much – the
name I’ve chosen for the struct is no more informative than the
auto-generated one, and the schema is much more verbose. It’s not like
I picked an especially lousy example, either; most of the grpc examples
I’ve seen (where this style is required) end up with similarly boring
names.
But forcing the schema author to do this also forces them to deal with
the namespacing problem themselves, so the code generator can drop not
one but two levels of nesting. In the Haskell implementation this
makes the type names significantly shorter; in the original you have:

data WebSocketStream'sendBytes'params = WebSocketStream'sendBytes'params
    { message :: Data
    }

whereas with the separately declared struct you get:

data SendBytesReq = SendBytesReq
    { message :: Data
    }

The resulting identifier is less than half the length of the original.

Case Conventions

Cap’n Proto currently uses case to distinguish between types and
constants/annotations, but some programming languages (including both
Haskell and Go) use case to distinguish other things. This can cause
trouble when the code generator has to modify a name to conform with
the target language.

For example, in persistent.capnp there is both a type called
Persistent and an annotation called persistent. Go uses capitalization
to distinguish between public and private identifiers, so if left to its
own devices, the Go backend capitalizes persistent, which causes a name
collision. The Haskell implementation tries to be smarter about this, but
it results in significant implementation complexity that could have been
avoided by a little thought during design.

We could solve this problem by having all identifiers use the same case
conventions (standardizing on smallCamelCase), and have them live in
the same namespace, so that anything that might cause a name collision
in the target language will also cause a collision in the schema
language itself.
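
To see why capitalization causes collisions, here is a toy version of the mangling in Go (export is a hypothetical helper; the real backend is more involved):

```go
package main

import (
	"fmt"
	"strings"
)

// export makes a schema name public in Go by capitalizing its first
// letter — the naive mangling a Go backend must do.
func export(name string) string {
	return strings.ToUpper(name[:1]) + name[1:]
}

func main() {
	// Both names from persistent.capnp mangle to the same identifier.
	fmt.Println(export("Persistent"), export("persistent"))
}
```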

Default Values

Finally, something other than names!

Cap’n Proto allows the schema author to define default values for
fields, if they are not otherwise set. So if you write:

struct Foo {
    bar @0 :Int32;
    baz @1 :Text = "hello";
}

If a message does not have the baz field set, it will be treated as
being the string "hello".

I strongly argue for disallowing these for pointer types (strings,
lists, structs…). They are already rarely used. Between the schema
that ship with Cap’n Proto and the ones in Sandstorm – over nine
thousand lines of schema source – the feature is used exactly twice,
both to set the default value of a parameter to the empty string.

For the Haskell implementation, this feature actually introduced a
significant amount of complexity – enough that, given how infrequently
the feature is actually used – I just decided not to support it. The
Haskell implementation just prints a warning to the console and
ignores custom defaults for pointer fields.

The usual way to implement this is to modify the message on first
access, which for immutable messages doesn’t work. I considered just
returning a constant in the immutable case, but its storage would be in
a different message than the pointer the user followed to it, which
could lead to surprises if the user tries to fish out the underlying
segment from the new value. There’s probably a decent design to be found
here, but after not coming up with a great answer quickly, I eventually
decided it was a waste of time.

Default values for non-pointer fields are much less of an issue – they
are implemented by storing fields xor’ed with their default values, so
the immutability issue doesn’t come up. For Haskell it’s totally fine.
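
A sketch of the xor trick in Go, using a hypothetical field whose default is 42:

```go
package main

import "fmt"

// A non-pointer field with a custom default is stored xor'ed with
// that default, so all-zero (unset) storage decodes to the default
// and nothing ever has to be mutated on read.
const fieldDefault uint16 = 42

func encode(v uint16) uint16 { return v ^ fieldDefault }
func decode(s uint16) uint16 { return s ^ fieldDefault }

func main() {
	fmt.Println(decode(0))         // unset storage reads as the default: 42
	fmt.Println(decode(encode(7))) // explicitly set values round-trip: 7
}
```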

But they do cause a problem for Go when using the pogs package, since by
default Go zeros the memory for structs, so if you have a custom default
value in a Cap’n Proto struct declaration, the default value of the Go
struct will disagree. It’s not an issue when you’re working with
messages in the wire format, since you’re not using Go structs
directly, but when performance permits, Go structs are nicer to work with.

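A minimal illustration; the Go struct here is a hypothetical pogs-style mapping of rpc.capnp’s Return type, whose releaseParamCaps field defaults to true in the schema:

```go
package main

import "fmt"

// Hypothetical pogs-style Go struct for rpc.capnp's Return type.
// The schema declares releaseParamCaps with a default of true.
type Return struct {
	ReleaseParamCaps bool
}

func main() {
	var r Return // Go zero-initializes struct memory...
	// ...so the Go zero value (false) disagrees with the
	// schema default (true).
	fmt.Println(r.ReleaseParamCaps)
}
```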
Supposedly this was also the motivation for dropping custom default
values from Protobufs in version 3.

Kenton mentioned on the mailing list that he uses custom defaults for
non-pointer fields “all the time,” particularly booleans. My own sense
was that these were more common as well, but I decided to do some
measurements, again on the core & Sandstorm schema. Here’s what I found:

  • There are 261 fields (including in parameters and return values) of
    non-pointer type declared in all of those schema.
  • Of those, 20 have explicit defaults set.
  • Of those with explicit defaults, 15 of them are just explicitly
    setting the field to the default value for the type – perhaps
    serving as documentation, but not actually having an effect.

This leaves only 5 uses of the feature that actually do anything.
The five cases are:

  • In the core schema’s rpc.capnp, the Return type’s
    releaseParamCaps field is set to true.
  • Likewise, the Finish type’s releaseResultCaps is set to true.
  • In the core schema’s schema.capnp, the Field type’s
    discriminantValue field has its default set to a sentinel value
    declared elsewhere as a constant called noDiscriminant (with a
    value of 0xffff).
  • In sandstorm’s web-session.capnp, WebSession.AcceptedType.qValue
    defaults to 1.0.
  • …as does WebSession.AcceptedEncoding.qValue.

The first two could be solved by changing release to retain and
flipping the semantics. Kenton had expressed concern about double-negatives
making things hard to read, and my original intent in doing these
measurements was to collect a list of such fields, pull out a thesaurus,
and try to get a sense of how hard it actually was to avoid that. I was
surprised to find that I didn’t get a sample size big enough to tell.

I had previously been on the fence as to whether this feature was worth
the awkwardness it introduced for Go, but given how little it seems to
be used in reality, I’m more solidly of the opinion that it should have
been left out. Given Kenton’s experience report I have to assume it’s
used more inside Cloudflare, but I no longer believe the feature carries
its weight.

Inheritance

I would make a few changes to the way inheritance of interfaces works in
the schema language:

First, signal an error if an interface declares a method with the same name as
a method in one of its parents. Right now, handling this correctly
requires more name spacing logic in the code generator, and I know at
least the Go implementation just punts on this – which is probably the
right thing to do, given that I’ve never seen a schema that actually
does this.
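
For illustration, a contrived schema (Base and Derived are hypothetical) that the proposed rule would reject:

```capnp
interface Base {
    frob @0 ();
}

interface Derived extends (Base) {
    # Under the proposed rule, redeclaring frob here would be a schema
    # error, rather than something every code generator must untangle.
    frob @0 ();
}
```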

I would also drop multiple inheritance. This would get rid of the diamond
problem, which is a source of API design challenges. One that came up in
the context of the Haskell implementation:

Haskell has good support for message-passing concurrency, and it would
be fairly natural to map the receive end of objects to a channel that you
pull calls out of, so for an interface like this:

interface Foo {
    bar @0 BarReq -> BarResp;
    baz @1 BazReq -> BazResp;
}

you would get code like:

data Foo'call
    = Foo'Bar BarReq (BarResp -> IO ())
    | Foo'Baz BazReq (BazResp -> IO ())

And then a server would handle these by doing something like:

handleFoo calls = forever $ do
    call <- recv calls
    case call of
        Foo'Bar req reply -> do
            reply returnValue
        Foo'Baz req reply -> do
            reply returnValue

This is nice for a lot of reasons; it means the library doesn’t have to
care as much about managing the lifetimes of servers/threads, the API
is small and transparent, it’s easy to have multiple threads service the
same object, or fork off a thread to handle something and reply later,
etc. It affords a lot of flexibility and is fairly simple to do.

Adding single inheritance to this situation isn’t too hard; just add a
super variant to the call type. Say we have:

interface Quux extends (Foo) {
    echo @0 EchoReq -> EchoResp;
}

Then this becomes:

data Quux'call
    = Quux'Echo EchoReq (EchoResp -> IO ())
    | Quux'super Foo'call

Multiple inheritance gets awkward though; you could add variants
Quux'super'Foo and Quux'super'Bar, but you can now potentially have
more than one path to the same object. It’s probably possible to find a
reasonable solution here, but I don’t think multiple inheritance buys
you enough for it to be worth imposing the burden on implementers (and
users of the resulting API).

I ended up mapping interfaces to type classes instead, but if I were to
do it over again I’d do what I describe above; type classes solve the
diamond problem, but the implementation needs to care about a bunch of
things it otherwise wouldn’t, and the inversion of control makes writing
servers harder.

Closing Thoughts

There are a lot of good things to say about Cap’n Proto. The two big
ones for me:

  • It’s fast. Not just the reference implementation, but even fairly
    naive implementations of the design – I’ve never even bothered to
    benchmark the Haskell implementation, much less gotten around to
    optimizing it, but I’ve nonetheless heard from people who are using
    it because protobufs was too slow.
  • The RPC protocol is expressive in a way that nothing else I’ve played
    with quite matches.

…but all of the incidental difficulties add up to a lot of unnecessary
implementation work, and clumsy APIs. The experience of implementing
Cap’n Proto has really cemented my belief in being ruthless in leaving
things out.
