Is GraphQL the Next Frontier for Web APIs? (brandur.org)
85 points by clra on March 31, 2017 | 70 comments



I'm a major REST advocate (spoken at conferences, written about it in books, etc) and I've been using GraphQL for a few months now and after an initial learning curve I love the flexibility.

But in my opinion, the answer is: write code once, expose both interfaces.

Here's what I mean...

I've been writing my APIs as GraphQL first. Then putting a very light wrapper around GraphQL to expose a REST interface. The REST interface is gold standard (HATEOAS, Json-API, etc) yet it is only a few lines of code. I just have a dictionary of REST -> GraphQL mappings and transform the GraphQL result to Json-API.
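Roughly, the wrapper looks like this (a trimmed-down sketch with hypothetical route and field names, assuming Express and the graphql-js reference implementation; the Json-API transform is left out):

  import express from 'express';
  import { graphql, buildSchema } from 'graphql';

  const schema = buildSchema(`
    type Customer { id: ID! name: String }
    type Query { customer(id: ID!): Customer }
  `);

  const rootValue = {
    customer: ({ id }: { id: string }) => ({ id, name: 'John Cash' }),
  };

  // The dictionary of REST routes -> GraphQL documents
  const routes: Record<string, string> = {
    '/customers/:id': 'query ($id: ID!) { customer(id: $id) { id name } }',
  };

  const app = express();
  for (const [path, source] of Object.entries(routes)) {
    app.get(`/api${path}`, async (req, res) => {
      const result = await graphql({ schema, source, rootValue, variableValues: req.params });
      // A real version would transform result.data into a Json-API document here.
      res.json(result.errors ? { errors: result.errors } : result.data);
    });
  }
  app.listen(3000);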

Then I also expose the GraphQL interface. So users that want to use GraphQL can.

It's a win-win and it didn't take me any more time to develop than just REST. As an added bonus, the GraphQL code is inherently easier to unit test.

GraphQL has a lot of flaws (for one it has nothing to do with graphs -- nearest I can tell the name is only because it was first used for Facebook's graph database) and things like pagination and error handling have no standard way to do them. But it is very flexible.


> I'm a major REST advocate (spoken at conferences, written about it in books, etc) and I've been using GraphQL for a few months now and after an initial learning curve I love the flexibility.

It's good to hear that there are some REST people that are open to potential alternatives :)

> I've been writing my APIs as GraphQL first. Then putting a very light wrapper around GraphQL to expose a REST interface. The REST interface is gold standard (HATEOAS, Json-API, etc) yet it is only a few lines of code. I just have a dictionary of REST -> GraphQL mappings and transform the GraphQL result to Json-API.

I'd certainly agree there. This could change in a year or two, but right now REST-ish APIs are still indisputably dominant. If you want to maximize your developers' experience, it's still too early to _require_ that they use GraphQL because there are going to be a lot of shops out there that are just not set up for it. Of course there is still the option of providing your own libraries and tooling for your users to go through, and if you go with that then the API's implementation is much less important.


If the answer is: "expose both interfaces" then my implementation is exactly it :) GraphQL & REST API for your database (https://subzero.cloud/) At the core sits PostgREST (https://postgrest.com/)


I hope you didn't just throw https://github.com/postgraphql/postgraphql and https://postgrest.com/en/v0.4/ behind some closed source code and called it a business.


Words can hurt, you know ... :) https://github.com/begriffs/postgrest/graphs/contributors?fr...

It uses PostgREST but not PostGraphQL; actually, it doesn't even use the graphql-js reference implementation.


Why? You don't think managing infrastructure composed of open source components is providing a valuable service?


That of course is valuable but it's not the main thing here. I'd say about 70% of the code powering this entire system is mine.


What is your business model here? Is this (subzero) projected to be an open source product supported by paid services and support, or is it entirely a cloud offering a la graphcool, parse or firebase?


It's going to be commercial, possibly with access to source. You could run it on your own infrastructure or have it hosted. Sort of like Atlassian.


No reason it couldn't be a mix of both.


> for one it has nothing to do with graphs

It can readily convey graph information, e.g. what are the first and last names of the friends of the friends of user 123? But yes, there's nothing really graph specific.

If it bothers you that "Graph Query Language" is too specific, just remember that "Structured Query Language" is too general :)


I can see three contenders:

1. CRDTs/sync solutions (like Firebase)

2. GraphQL with real-time subscriptions

3. A variant of REST over WebSockets with real-time subscriptions

I don't think #1 will be able to cover all necessary use cases because it is much more computationally expensive (in terms of memory, CPU and bandwidth). Both #2 and #3 are good solutions but I think #3 will prevail in the end because it's a lot simpler.

You can break-up RESTful resources into individual fields which each have their own independent real-time subscriptions (so each observable model on the front end can hold a single property of a resource). With GraphQL, you may end up with overlapping fields between different 'views' and it makes it harder to track/manage subscriptions.
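Something like this on the server side, just to illustrate the per-field idea (hypothetical channel names, using the ws package; not a complete protocol):

  import WebSocket, { WebSocketServer } from 'ws';

  // Each channel is a single field of a single resource, e.g. 'customers/42/name',
  // so two views never end up sharing an overlapping payload.
  const subscriptions = new Map<string, Set<WebSocket>>();

  const wss = new WebSocketServer({ port: 8080 });
  wss.on('connection', (socket) => {
    socket.on('message', (raw) => {
      const msg = JSON.parse(raw.toString()); // e.g. { action: 'subscribe', channel: 'customers/42/name' }
      if (msg.action === 'subscribe') {
        if (!subscriptions.has(msg.channel)) subscriptions.set(msg.channel, new Set());
        subscriptions.get(msg.channel)!.add(socket);
      }
    });
  });

  // The REST layer calls this whenever a single field of a resource changes.
  function publish(channel: string, value: unknown) {
    for (const socket of subscriptions.get(channel) ?? []) {
      socket.send(JSON.stringify({ channel, value }));
    }
  }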


This list is very interesting!

Which one performs the best in:

• Raw Performance

• Proven Architecture

• Robustness & Congestion Tolerance

• Backward & forward compatibility

• Customizability

Is Fabrice Bellard's ASN.1 implementation (ffasn1) the fastest serialization format? http://bellard.org/ffasn1/

Here's a list of all formats: https://en.wikipedia.org/wiki/Comparison_of_data_serializati...


GraphQL and REST solutions will almost certainly be much faster than any CRDT solution. I think that the REST solution should be more efficient than the GraphQL solution but it probably won't be a drastic difference.

The essence of the REST philosophy is to deal with data as small atomic building blocks - I think that this is one of the most important and well-tested ideas in software engineering so on that basis, I would go with REST. Though you could also argue that the idea of querying (which is what GraphQL essentially is) is also well tested.

CRDTs are bad with bandwidth and they require very heavy/complex clients that are difficult to implement. Once implemented, they make life easier for many use cases but take away flexibility and customizability.

I think the main challenge with GraphQL actually might be doing access control on the back-end. If you have a GraphQL query that fetches many different resource types, it's a lot more difficult to establish whether or not a user is allowed to access all these resources at once (GraphQL approach) vs just a single resource at a time (REST Approach).

With GraphQL you may have to deal with partial access rights - where the user is only allowed to see part of the query's result. With REST, each request can only have a single allowed/blocked response.


GraphQL should not deal with that (authorization) at all, imo. The underlying service (REST/database) should be responsible for that. What the GraphQL implementation should do is "tell" the underlying service who is making the request and let it handle authorization. And endpoints/fields in the GraphQL schema should not be "hidden" based on who is currently logged in.

Just like in SQL, where you are free to see all the available tables in the database and run "select * from secret_table", that does not mean you will get back any data. In the same way, you should be able to query any field in GraphQL but get an error if you try to access data that you are not supposed to.
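In resolver terms, something like this (a hypothetical sketch in the style of a resolver map; the underlying service owns the rules, the GraphQL layer just passes the caller along and surfaces the refusal as an error):

  import { GraphQLError } from 'graphql';

  // Hypothetical underlying service; it alone decides who may read what.
  async function readSecretRow(callerId: string, rowId: string) {
    const allowed = false; // stand-in for the service's real permission check
    if (!allowed) throw new Error('forbidden');
    return { id: rowId, value: '...' };
  }

  // The field stays visible in the schema for everyone; unauthorized callers
  // simply get an error back instead of data, like "select * from secret_table".
  const resolvers = {
    Query: {
      secretRow: async (_parent: unknown, args: { id: string }, ctx: { callerId: string }) => {
        try {
          return await readSecretRow(ctx.callerId, args.id);
        } catch {
          throw new GraphQLError('not authorized to read secret_table');
        }
      },
    },
  };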


> The essence of the REST philosophy is to deal with data as small atomic building blocks

It's actually not; that seems to be something imposed after the fact by association of REST with a thin layer over a normalized DB model.

REST is neutral about both the size and atomicity of resources.


Only when you read a filtered list of resources but not for CRUD operations (CRUD operations make sure you deal with a single resource at once). My main point is that writes are atomic.


I thought GraphQL subscriptions are specifically _not_ a "live query", so not to be used as real-time subscriptions?

Or am I reading the current status of the RFC[1] wrong? If so, cool! Would love to have something more standardised than just a 1:1 mapping from rethinkdb or thinky calls over websockets/socket.io.

[1] https://github.com/facebook/graphql/blob/master/rfcs/Subscri...


I don't know about external web APIs, but for an internal network of services talking to each other I've found GraphQL to be very convenient. The "single entry point" and "describe what you want, you'll get what you described" mindset makes it powerful, when used with the graphiql trial and error interface. We barely have to document anything anymore. It seems a bit weird to use a twisted json for the queries but it works well after getting used to it. If someone wants to go this road, I'd suggest enforcing use of variables in queries (so high level caching works better) and going directly to the connection model for pagination.

http://graphql.org/learn/queries/#variables http://graphql.org/learn/pagination/
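For instance, the client keeps one constant query document and only swaps the variables, with cursor-based pagination in the connection style (field names here are hypothetical):

  // The query text never changes between requests, so intermediaries and the
  // server can cache/whitelist it; only the variables vary.
  const query = `
    query Charges($customerId: ID!, $first: Int!, $after: String) {
      customer(id: $customerId) {
        charges(first: $first, after: $after) {
          edges { cursor node { id amount } }
          pageInfo { hasNextPage endCursor }
        }
      }
    }
  `;

  async function fetchCharges(customerId: string, after?: string) {
    const res = await fetch('/graphql', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ query, variables: { customerId, first: 20, after } }),
    });
    return res.json();
  }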


> The "single entry point" and "describe what you want, you'll get what you described" mindset makes it powerful, when used with the graphiql trial and error interface. We barely have to document anything anymore.

+1. It seems to me that another big advantage is speed of implementation. No matter how good your web stack is, implementing a whole bunch of separate HTTP endpoints tends to be fairly verbose and slow, and each should probably be tested thoroughly in separate modules. GraphQL mostly involves mapping data available in the GraphQL API to how it should be fetched in the backend. Lots of huge opportunities for sharing and reusing code, which is especially good for internal work where you don't necessarily need all the bells and whistles.


We tried to use GraphQL, but in our opinion it has a major design flaw: not everything that can be expressed in JSON can be expressed in GraphQL. Our application makes heavy use of the Semantic Web (if you haven't already, check out SPARQL and RDF, which made similar promises), which means we often end up with responses that contain URIs as map keys. This is not allowed as a filter in GraphQL, even though it is valid JSON. Other parts of our stack contain Clojure(Script), and we make use of namespaced keywords that serialize to "foo/bar" as map keys. Again something that is not allowed in the language, and would break our application. I find it a particularly odd design choice to not make GraphQL spec match the JSON spec. As there are now many things that are impossible to specify as a query, which would otherwise be valid JSON.


> I find it a particularly odd design choice to not make GraphQL spec match the JSON spec.

I can't speak for the original authors, but I can kind of understand it just by virtue of the fact that JSON is quite an ugly format that's hard to write for humans and comes with plenty of opportunity for error (missing quotes in places and the like). Their language also allows the introduction of some higher level semantics like an `Int!` to indicate that an argument can't be null.

Finally, GraphQL's query language has one major feature that makes it worthwhile on its own: you can comment things! This is something that you could never hope to do in JSON :/
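A trivial illustration of both points (hypothetical types, written as an SDL string the way graphql-js's buildSchema takes it):

  const typeDefs = `
    # Unlike JSON, the GraphQL language itself allows comments.
    type Query {
      # The Int! means the argument is a non-null integer.
      charge(id: Int!): Charge
    }

    type Charge {
      id: Int!
      amount: Float
    }
  `;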


Oh I totally agree there. I never quite grokked the obsession with JSON these days. But if you go to the trouble of writing a query language that /specifically/ targets JSON output ... it seems like it would at least need to be comprehensive. Or come up with another output serializer/deserializer ... but that would put considerable strain on the consumers.


Personally, I found the description of (G)RPC intriguing, since the API that I've been building recently for a project resembles this most closely. I had started by trying to follow REST, then over time in an effort to reduce and simplify network requests, the API became a collection of, basically, remote function calls - organized as a map of "actions" on both client and server. I'm still pretty much a novice at building APIs, though, so the article was educational for me to compare the pros and cons of various approaches.


"Roy Fielding’s original ideas around REST are elegant and now quite widespread"

I don't think this is actually true. I think you see some ideas plucked from Fielding's paper, but describing REST in terms of performing CRUD on resources is not what Fielding's paper is largely about. There's some good RESTful APIs out there, and Stripe does have a pretty nice API for the current state of API affairs. But I'd say we're still decently early in figuring out how to more widely leverage concepts from Fielding's paper, so declaring that REST isn't fulfilling needs seems a little short to me. I'd rather see what happens if more people commit to understanding & implementing other beneficial concepts in their APIs.

"REST also has other problems. Resource payloads can be quite large because they return everything instead of just what you need, and in many cases they don’t map well to the kind of information that clients actually want, forcing expensive N + 1 query situations."

So maybe the idea here is that it's harder to implement, but you can absolutely structure REST APIs to have these same benefits, and you may actually find that REST offers advantages over GraphQL here. There is nothing un-REST-like about returning different message formats based on the client sending you a specified request header, or providing a URL parameter, or what have you. You can actually leverage this to your (& your API clients') benefit by providing a small subset of different response "shapes", or you could just allow them to specify exactly what they need for a request in a free-for-all sort of fashion. It's all up to you, but I can't buy this argument for GraphQL; it's a symptom of writing inflexible REST-ish APIs, not a damning fact about REST as a concept.
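For example, a handler can hand back only the fields the client asked for with nothing more exotic than a query parameter (a hypothetical Express sketch):

  import express from 'express';

  const app = express();

  app.get('/api/customers/:id', (req, res) => {
    // Pretend this came from the database.
    const customer = { id: req.params.id, name: 'John Cash', dob: '1990-01-01', balance: 35.3 };

    // e.g. GET /api/customers/1?fields=name,dob returns just those two properties.
    const fields = typeof req.query.fields === 'string' ? req.query.fields.split(',') : null;
    const body = fields
      ? Object.fromEntries(Object.entries(customer).filter(([key]) => fields.includes(key)))
      : customer;

    res.json(body);
  });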


It's probably not super popular to suggest that REST isn't the be-all and end-all when it comes to web APIs, but I'm interested in a future that lets us integrate with APIs more quickly and more safely. I'm interested to hear about what people think about the subject.

(I'm the author.)


Thinking about the problems with client-side SQL seems informative[0]:

* Security; allowing evaluation rather than describing results.

* DDL; Data Structure and Types need to be learned out-of-band. Ideally we want a sort-of interactive INFORMATION_SCHEMA over http, so our tools can inform us about types/structure and we always have updated info as we write queries.

* Performance; Stored Procedures (server-side) protect against a client making poor joins; ORMs requesting too much data, etc...

* Info about Security: client side SQL doesn't know what rights it has, or you could say that a client account can't see the big picture of DDL. The understanding of rights is all communicated through non-API means.

* Info about Performance: clients don't know rate limits, current system load, what system admins consider reasonable data sizes or reasonable analytic calculations, etc...

I don't know what GraphQL does for each of those issues, but I'd love to hear.

[0] I recently set up InfluxDB which has a (private) web interface that takes raw SQL-ish, and it's so convenient.


So I think it's a bit of a misconception that GraphQL is going to essentially map to SQL in the backend. In practice, it's up to the service provider to decide what query paths and mutations are available in GraphQL, and design their schema accordingly.

So for example, if you allowed "comments" to be queried within a "user", you'd make sure that users could be looked up efficiently by their ID, and that comments could be loaded in bulk efficiently based on user ID. You'd also make sure that your implementation for both those operations is secure.

It's not too dissimilar to REST in that you're still very much designing what your API should look like and not exposing anything else.
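For the comments-within-a-user case, "loaded in bulk efficiently" could be as simple as batching the lookups into one SQL round trip (hypothetical table and column names, using node-postgres; libraries like DataLoader automate this pattern):

  import { Pool } from 'pg';

  type Comment = { id: string; user_id: string; body: string };
  const db = new Pool(); // connection settings come from the environment

  // One query for a whole batch of user IDs instead of one query per user.
  async function commentsByUserIds(userIds: string[]): Promise<Comment[][]> {
    const { rows } = await db.query<Comment>(
      'SELECT id, user_id, body FROM comments WHERE user_id = ANY($1)',
      [userIds],
    );

    // Group the flat result back into per-user buckets, preserving input order.
    const byUser = new Map<string, Comment[]>();
    for (const row of rows) {
      const bucket = byUser.get(row.user_id) ?? [];
      bucket.push(row);
      byUser.set(row.user_id, bucket);
    }
    return userIds.map((id) => byUser.get(id) ?? []);
  }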


I cover a lot of these thoughts in my post about misconceptions about GraphQL coming from REST here: https://dev-blog.apollodata.com/graphql-the-next-generation-...


I'm a big fan of postgrest (https://github.com/begriffs/postgrest/).

It creates a RESTful HTTP API out of your existing postgres schema. It practically eliminates the need to write a CRUD backend.
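For example, once PostgREST is pointed at a schema with an items table, a client can filter rows and pick columns straight from the URL, with no handler code involved (illustrative only; see the PostgREST docs for the full operator syntax):

  // Fetch the id and name of every item in collection 1.
  async function listItems() {
    const res = await fetch('http://localhost:3000/items?select=id,name&collection=eq.1');
    return res.json();
  }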


Yes, PostgREST is awesome :)

It turns out that most of what is usually expressed in implementation code assuming that you're hitting a dumb backend is just as well-served by SQL functions and schemas. This probably breaks down for larger projects, but it certainly has its place, and I can imagine something very similar being done for GraphQL.


I don't think it breaks down (as long as most of the data comes from the database). For example, I know exactly how paymoapp.com (similar to basecamp) is structured, I consider it to be a fairly large system, and I know for a fact that PostgREST can support that. As for "very similar being done for GraphQL", there is https://github.com/postgraphql/postgraphql which was based on PostgREST ideas. Another approach would be to build GraphQL on top of PostgREST (my comment below).


Usually when you say CRUD, people tend to think "limited functionality", and PostgREST is a bit more than "basic CRUD", as I am sure you know. Its interface is powerful enough to support GraphQL on top (since this is the topic of this thread :) https://subzero.cloud/ )


Yes it probably is. Back when I worked on LDAP, we found that people were using the protocol in unexpected ways, basically as a standardized general purpose record/field/value/query access protocol. They were doing this quite strange thing because it appeared to them better than the alternatives, which were: use the underlying database access protocol (usually proprietary and undocumented), or roll their own HTTP-based thing, inventing most of the rules and encoding mechanisms themselves (sound familiar??), or use some invented-by-committee thing such as CORBA.

GraphQL I think is meeting a similar need but with the added advantage that it was designed for the task at hand, mostly not by a committee.


> GraphQL I think is meeting a similar need but with the added advantage that it was designed for the task at hand, mostly not by a committee.

Right on. I think that its early strength of opinion and design and having a spec available from the get-go is one of GraphQL's big strengths.

There are lots of ideas about how to use REST and HTTP correctly, but for the most part the only things that are universally agreed on are that URLs are _usually_ resources (you can find lots of cases where they're not even in APIs that are popularly thought to be well designed) and that HTTP verbs _should_ map to CRUD (but again, they often don't because ideas like `PATCH` only became widespread quite late in the game).


Does anyone know if there is a REST variant that allows you to send/get associated resources nested in a single call?

Taking the customers+charges example on the article, something like:

  POST /api/customers/null/charges

  {
    name: 'John Cash',
    dob: '1990-01-01',
    charges:[
      {type:1, amount: 12.5},
      {type:1, amount: 22.8}
    ]
  }
I think the option to bundle things in a single call is wonderful, but GraphQL seems much more complicated to implement than REST.


> Does anyone know if there is a REST variant that allows you to send/get associated resources nested in a single call?

Json API has standards for how to do related records.

http://jsonapi.org/format/#fetching-includes

> I think the option to bundle things in a single call is wonderful, but GraphQL seems much more complicated to implement than REST.

Not necessarily.

I find GraphQL easier to write and consume, but harder for error handling and pagination. Plus caching is impossible in GraphQL without special client-side code. But I use Node.js, which has a very mature GraphQL library. If there isn't a library for your language, it would be very difficult.

If done properly, REST actually has a lot of rules. For example, that JSON you posted is not REST because it doesn't have hypermedia.

Of course, REST has the benefit of requiring no external libraries except HTTP. And no one says you have to follow all the constraints of REST (though I recommend it).


Awesome, seems like the "JSON API" specification is the REST-ish API variant I was looking for.

It has support for "related resources" and even "sparse fieldsets" (http://jsonapi.org/format/#fetching-sparse-fieldsets) which covers the main advantages I can see on GraphQL in reducing number of request and the size of the responses.
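For the customers+charges example further up, a single request could then look roughly like this (field names hypothetical):

  // Side-load the charges and trim both record types to the named fields.
  async function getCustomerWithCharges(id: string) {
    const res = await fetch(
      `/api/customers/${id}` +
        '?include=charges' +
        '&fields[customers]=name,dob' +
        '&fields[charges]=type,amount',
      { headers: { Accept: 'application/vnd.api+json' } },
    );
    return res.json(); // a JSON API document: { data: {...}, included: [...] }
  }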


GraphQL was not meant to be a replacement for REST. It doesn't seem right to compare an architectural concept to a query language.

The main power of GraphQL is for client developers and lies in the decoupling it provides between the client and server and the ability to fulfill the client needs in a single round trip. This is great for mobile devices with slower networks.

Introspection is a really powerful tool and you can build tools around that to do compile time checks and automatically generate class definitions based on your schema.


REST is horrible. It encourages new developers to think in terms of CRUD, which is also horrible. (https://msdn.microsoft.com/en-us/library/ms978509.aspx).

RPC is great. It encourages developers to think in terms of functions and leverage more creative ways of getting things done. Event Sourcing, CQRS, etc.

GraphQL is really neither here nor there. It's more of a convention on top of those layers. I think it's a good idea, and a good convention for libraries in different languages to rally around. Though it can be abused too: don't use it to replace a fairly simple set of REST-style queries just because you can.

Is GraphQL the future? It may be part of it. But, it's not everything. It's just a query language. Okay there's a mutation part of the spec too, and it's CRUD-like. I'd like GraphQL better if there was a more RPC-like mutation.

I hope the future includes some more interesting streaming-style protocols. The whole request/response thing is pretty boring anymore no matter how you do it.


> REST is horrible. It encourages new developers to think in terms of CRUD, which is also horrible.

It encourages developers to think in terms of data, which is amazing. At the end of the day, we're just doing data transformations, that's it. GraphQL is an interesting iteration on that idea, because it focuses more on the data.

REST is simply a way to design your APIs so that they're extensible, usable, and maintainable. If you're doing something really specific, maybe it's not the best thing, but it's a really good starting point.

As per your CQRS comment.... https://martinfowler.com/bliki/CQRS.html

"Despite these benefits, you should be very cautious about using CQRS. Many information systems fit well with the notion of an information base that is updated in the same way that it's read, adding CQRS to such a system can add significant complexity. I've certainly seen cases where it's made a significant drag on productivity, adding an unwarranted amount of risk to the project, even in the hands of a capable team. So while CQRS is a pattern that's good to have in the toolbox, beware that it is difficult to use well and you can easily chop off important bits if you mishandle it."


> At the end of the day, we're just doing data transformations, that's it

That's kind of the thing though. REST doesn't do transformations. It just does CRUD of resources.

For me, when designing an application, I will think, "is representational state transfer the only thing this application will ever need to do?" If the answer is "no" then REST seems like a poor model to build around. I've never answered "yes" to that question. (That said, I spent many years mindlessly building REST apps before I started asking myself that).

Per CQRS etc., that's right. It's a tool. Use it when appropriate. Don't use it when not. (That said, the scare factor of that article is a bit high--I've had very little trouble with it since switching to it as my go-to update pattern about five years ago. Also note Martin Fowler has changed his mind on several topics lately, such as the Rich vs Anemic domain model, away from OOP and toward a more FP viewpoint. CQRS is certainly the more FP model here.)


Have you got a link to Fowlers new thoughts on Rich vs Anemic Domain Models?


I have a fairly dumb question. If one were building a standard web app... say, a super simple web store that lists products, allows someone to submit products and such. How would they use RPC instead of rest? Can you give some concrete examples of how it's different / better.


Users, for instance. If you think of users as a resource, then updating them is a PUT to /users/{id}. Fine, they can update their info. But you also need a way to change their password. That's a different entrypoint. Where does it go? /users/{id}/password? Is it a PUT or a POST?

I mean, nothing here can't be worked around. Millions of sites do this just fine. But to me, the mental model just doesn't match up. The REST model tries to present a facade that everything is a "resource". In my experience, as apps get bigger, more and more things start not quite fitting that model. You end up having to justify to yourself why certain things go on certain endpoints and soon half your API is REST and half is some hacky REST-RPC hybrid anyway. Starting with an RPC model from the get-go, you avoid these problems.

And note, REST and RPC aren't technologies. REST is a form of RPC, and you can certainly roll your own RPC protocol over REST. They're merely different ways of thinking about and modeling your application.


> If you think of users as a resource, then updating them is a PUT to /users/{id}. Fine, they can update their info.

That sounds a lot more like a user profile resource than a user resource.

> But you also need a way to change their password. That's a different entrypoint.

Or, if you have a user resource rather than separate profile and password resources, all partial updates could go to the same resource as PATCHes.
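i.e. something along these lines (hypothetical fields), both hitting the same resource:

  // Update the display name...
  await fetch('/users/123', {
    method: 'PATCH',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: 'John Cash' }),
  });

  // ...and change the password, as just another partial update.
  await fetch('/users/123', {
    method: 'PATCH',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ password: 'correct horse battery staple' }),
  });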

> Where does it go?

If you spend a lot of time thinking about that, you probably aren't doing REST.

> Is it a PUT or a POST?

Is it a replacement or a new resource? Largely, that depends on your domain model, and there are models under which either answer (or PATCH) makes sense. As long as the definitions of your resource representations tell what operations are supported, and what they mean, for endpoints represented by the relevant link relations, it really doesn't matter all that much.

> The REST model tries to present a facade that everything is a "resource". In my experience, as apps get bigger, more and more things start not quite fitting that model.

I can't imagine how anything could fail to fit the model, unless you assume (as REST does not) that resources must be completely independent so that an action on one resource has no effects on the state of other resources.


It's more natural to deal with collections and other data structures using RPC. REST works best with single items. As soon as you need to start generating reports, it can be complicated to create sane REST endpoints whereas it's trivial with RPC.

REST:

  GET /api/item/1
  GET /api/item/2
  POST /api/item { name: foo, collection: 1 }
  POST /api/item { name: bar, collection: 1 }
RPC:

  GET /api/items
  POST /api/items [{ name: foo, collection: 1 }, { name: bar, collection: 1 }]


But that's not how you would use REST and RPC though, you seem to be confusing both paradigms.

REST:

  GET    /api/items   (returns a list of items)
  GET    /api/items/1 (returns item with id#1)
  POST   /api/items   (creates a new item, "adds" it to the list)
  PUT    /api/items/1 (updates item with id#1, the whole resource needs to be sent)
  PATCH  /api/items/1 (updates item with id#1, only changes in the resource need to be sent)
  DELETE /api/items/1 (deletes item with id#1)
RPC (one possible implementation):

  GET  /api/listItems  (lists all items)
  POST /api/addItem    (adds an item)
  POST /api/updateItem (updates an item, the id is in the payload)
  GET  /api/deleteItem (deletes the item)
(Verbs don't matter in RPC, I only used POST in favour of GET in the above examples because some clients and servers enforce limits on the size of GET requests.)


Right, but RPC allows for, say, updating items 3, 4, 5, 6 in one request, whereas REST dictates that you do it over 4 separate requests. REST can also get messy if you, let's say, update all items with value > 4 as you can't

  PATCH /api/items/greaterthan/4 { active: false }
Whereas with RPC you can easily do:

  POST /api/items?greaterthan=4 { active: false }
In REST, you would do

  POST /api/item_updates/ { greaterthan: 4, active: false }
It comes off as messy to me as I would prefer to keep all endpoints around manipulating a particular model within the same controller rather than having multiple controllers for each model.

I find there's an impedance mismatch between what endpoints my solutions need and what REST dictates. And it requires a lot of acrobatics to map my solution onto the REST way.


> REST dictates that you do it over 4 separate requests

Can you point me to documentation that supports this claim? I've yet to see anything that expressly prohibits batched requests.


I can't find a proper reference but that is my understanding after having thoroughly researched it years ago.


How are your RPC examples not REST?

RPC would involve a client lib, and may or may not even use HTTP as transport.

api.get_product(id=1)

api.add_product(details)


I thought we were talking about RPC over HTTP. My RPC examples don't qualify as "proper" REST where you are only supposed to GET and POST singular resources at a time. In order to post multiple, you need a collection identifier. In REST, you can't GET "items", you can get "item1" and "item2" or you can get "collection1" to which "item1" and "item2" belong.

Maybe a clearer example around running a script:

REST:

  POST /api/action { action: do_the_thing }
RPC:

  POST /api/do_the_thing
RPC is the most basic mapping of actions > endpoints whereas REST demands you organize everything into resources.


Interesting. It sounds rather similar to me as the way the difference between functional and OO programming is often described (verbs-first vs nouns-first). Is there anything to that or am I comparing apples with non-apples?


I think there is definitely some basis to your analogy :)

I think that it also stems from most ORMs' focus on updating single database rows at a time even though operations frequently span multiple rows.

As all data manipulation ultimately happens on the server, I feel it's often unnecessary to keep track of individual resources in the client.


At Kaltura.com we auto-generate all client libraries in all the different programming languages based on an auto-generated API descriptor XML that is produced using reflection from the server code. It solves most compilation, maintenance, and usability issues for integrators. See our client generator: https://github.com/kaltura/clients-generator See a few more server-side examples: https://restafar.com/


Funny thing is GraphQL isn't a graph query language. It returns lists. I'm working on a similar concept for a project right now, but that actually queries graphs / trees. (Sorry, not open source). As I've been working on this, it's struck me as odd how much query technology is built around tables and how hard it is to fit graphs and trees into it with reasonable performance.


Aren't you afraid you might be reinventing SPARQL ;-) ?


Nah, the next frontier is more likely everyone fully consolidating around REST as it stands today. This looks like movement for movement's sake, instead of improving developer ergonomics in an increasingly predictable world.

This is like everyone finally agreed on a plug type, and someone starts to push 3 phase. Let's get all the houses wired up, instead.


I don't know enough to justify my opinion, but I really, really don't get REST. Having recently had to learn a lot of the web stack in a short time, I found REST to be one of the most confusing parts, and I think it's because it intrinsically doesn't make sense. I have a client and a server; they need to share a language so that the client can tell the server what to do.

REST seems like jumping from that situation to a situation where you have an object hierarchy modelled as a tree and can only perform CRUD. But why? As in the article, what if I have operations which only make sense as a combination of CRUDs on different objects? What if my type system doesn't map nicely to a tree? Why am I forcing this interface through a step of serializing it into a set of actions that it doesn't naturally map to and don't individually make sense?

In most programming languages you import a library with a set of functions or classes, and call them with parameters (binding on key-value or order or both).

If the server is, to my client side, another library, why do I impose CRUD on every operation and pretend that this makes sense? It's like a programming language where, instead of calling a function, you have four modes of calling a function which must map to CRUD (and these have different conventions for the same thing, e.g. body vs. query string). There are no guarantees that they actually do map to CRUD, lots of people don't follow the convention properly, and yet everyone insists that you must try to adhere to it as purely as possible in the name of "standardization".


> REST seems like jumping from that situation to a situation where you have an object hierarchy modelled as a tree and can only perform CRUD. But why?

Because, frankly, most people doing "REST" don't understand REST. While URLs can imply a tree structure, REST does not give URLs any meaning besides resource identifiers; relations are communicated in resource representations not URLs and can be any model you like.

There is a sense in which operations are essentially CRUD-like but, in the database analogy, it's CRUD operations against views with arbitrarily complex definitions, and trigger logic for operations other than reads, not CRUD against base tables.

> As in the article, what if I have operations which only make sense as a combination of CRUDs on different objects?

Then, most likely, you have a resource for the operation, which you create to initiate the operation (or create to configure and have a specified update to trigger.)

> What if my type system doesn't map nicely to a tree?

Then you don't use a tree representation. REST doesn't need URLs as anything other than opaque identifiers, and resource representations can communicate any relationship structure you want.

> If the server is, to my client side, another library, why do I impose CRUD on every operation and pretend that this makes sense?

Well, first, CRUD against resources with unconstrained definitions is a fairly flexible and universal model, so it probably makes sense as long as you don't artificially constrain the model; and second, REST isn't always the right choice anyway. There is nothing wrong with doing RPC or other non-REST things (and it would be better if we could accept that, so people could stop feeling obligated to say that the non-REST things they are doing are "REST", muddying the waters).


In case you're not aware, the design you're proposing is generally called "RPC" (remote procedure call). In many ways, it's much more intuitive, but there are tradeoffs. This article might make interesting reading: http://etherealbits.com/2012/12/debunking-the-myths-of-rpc-r....


> Nah, the next frontier is more likely everyone fully consolidating around REST as it stands today. This looks like movement for movement's sake, instead of improving developer ergonomics in an increasingly predictable world.

I certainly agree with the plug idea -- consistency is very good -- but I think that we should really consider what REST is actually buying us at the end of the day.

Most people prefer to integrate through a well-maintained SDK where one is available rather than shelling out to HTTP directly (imagine calling the APIs of something like AWS over raw HTTP), and if that's fairly widespread, it might not be too bad of a thing to start moving the underlying transmission protocol of those SDKs over to something that's more flexible and more efficient (like GraphQL, but also maybe other things).

A comparable analog might be when Google experimented with replacing TCP and TLS with a new protocol called QUIC that operates over UDP [1]. Because QUIC still exposes an HTTP/2 API, providers and server-side infrastructure could potentially move over to it relatively painlessly.

[1] https://ma.ttias.be/googles-quic-protocol-moving-web-tcp-udp...


Have you ever actually used it? It does greatly improve developer ergonomics on the client side. Check out Apollo + React (or another framework).


ClojureScripters like to leverage GraphQL (David Nolen)


More as an inspiration, though. You don't have to define a spec with om.next queries, for example.

Walmart Labs just released a library about a week ago that does actual GraphQL, and there are libraries for the front-end but they aren't widely adopted and not really like Relay, afaik


RPC looks to be the most flexible of the bunch.

What are the downsides of the RPC approach as compared to REST and GraphQL?


By Betteridge's Law of Headlines, the answer is "No".

REST describes the architecture of the web, which most websites already conform to. Developers could design web APIs more like web pages, but standard media types must be adopted first. JSON-LD is a strong contender at that.


> By Betteridge's Law of Headlines, the answer is "No".

I feel a little bad for phrasing a title like a question, but an alternative working title was "The Future of API Paradigms". I tried to touch on a variety of possible paths beyond GraphQL, including that REST might be the right way forward.

> REST describes the architecture of the web, which most websites already conform to. Developers could design web APIs more like web pages, but standard media types must be adopted first. JSON-LD is a strong contender at that.

Yes, quite possible, but it seems to me that REST and hypermedia aren't actually buying us all that much when it comes to APIs. Theoretically it makes them more discoverable and future proof, but in practice I find that no one really uses these features, and even when they're available, they're often used improperly.

On the other hand, something like GraphQL has some pretty significant advantages in the form of very flexible querying and very efficient message transport (you get everything in one call instead of _N_).


Having a specification that tells you exactly how to do a particular thing can be more expedient than being open to interpretation. But short-term benefits aren't exactly a goal of REST:

> REST is software design on the scale of decades: every detail is intended to promote software longevity and independent evolution. Many of the constraints are directly opposed to short-term efficiency. Unfortunately, people are fairly good at short-term design, and usually awful at long-term design. Most don’t think they need to design past the current release. [0]

There isn't anything about REST that is opposed to flexible querying or fetching multiple resources in one request. Isn't that what web pages do already? I've used very complicated search forms and fetched lots of related data on the same page. What makes it possible is the use of standards like query strings, x-www-form-urlencoded, multipart/form-data, and HTML itself. Machine clients may just need their own media types.

[0] http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hyperte...



