In this section, we assume you use Hasura GraphQL Engine integration to power your API.
Before starting client integration, it's good to know the specifics of Hasura's GraphQL protocol implementation and the general state of the GraphQL ecosystem.
By default, Hasura generates three types of queries for each table in your schema:
- Generic query enabling filters by all columns
- Single item query (by primary key)
- Aggregation query (can be disabled)
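For a hypothetical `token` table, the three generated root fields might look like the following (actual names and columns depend on your schema):

```graphql
query {
  # Generic query with filters, ordering, and pagination
  token(where: {level: {_gt: 100000}}, order_by: {id: desc}, limit: 10) {
    id
    level
  }
  # Single item query by primary key
  token_by_pk(id: 42) {
    id
    level
  }
  # Aggregation query
  token_aggregate {
    aggregate {
      count
    }
  }
}
```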
All GraphQL features, such as fragments, variables, aliases, and directives, are supported, as well as batching.
Read more in Hasura docs.
It's important to understand that a GraphQL query is just a POST request with a JSON payload, and in some instances, you don't need a complicated library to talk to your backend.
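A minimal sketch of this idea using only the standard library, assuming a local Hasura endpoint and a hypothetical `token` table:

```python
import json
import urllib.request


def gql_request(url, query, variables=None):
    """Build a plain HTTP POST request carrying a GraphQL query as JSON."""
    payload = json.dumps({"query": query, "variables": variables or {}}).encode()
    return urllib.request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json"},
    )


# Endpoint URL and table name are assumptions; adjust to your deployment.
req = gql_request(
    "http://localhost:8080/v1/graphql",
    "query ($limit: Int!) { token(limit: $limit) { id } }",
    {"limit": 10},
)
# response = urllib.request.urlopen(req)  # uncomment to actually send the query
```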
By default, Hasura does not restrict the number of rows returned per request, which could lead to abuse and heavy load on your server. You can set up limits in the configuration file; see 12.5. hasura for details. However, you will then need to paginate over the items if the response does not fit into the limits.
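The offset-based approach can be sketched as a small generator that keeps requesting pages until the server returns fewer rows than the page size. The `fetch_page` callable is a placeholder for whatever function issues the actual GraphQL query with `limit` and `offset` arguments:

```python
def paginate(fetch_page, page_size=100):
    """Yield all rows by repeatedly querying with limit/offset.

    fetch_page(limit, offset) must return a list of rows for one page.
    """
    offset = 0
    while True:
        rows = fetch_page(limit=page_size, offset=offset)
        if not rows:
            return
        yield from rows
        # A short page means we've reached the end of the result set
        if len(rows) < page_size:
            return
        offset += page_size
```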
From Hasura documentation:
> Hasura GraphQL engine subscriptions are live queries, i.e., a subscription will return the latest result of the query and not necessarily all the individual events leading up to it.
This feature is essential to avoid complex state management (merging query results and subscription feed). In most scenarios, live queries are what you need to sync the latest changes from the backend.
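A live query is an ordinary query wrapped in a `subscription` operation. A minimal example, again assuming a hypothetical `token` table: every time the underlying data changes, the client receives the fresh top-10 result rather than a change event.

```graphql
subscription {
  token(order_by: {level: desc}, limit: 10) {
    id
    level
  }
}
```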
If the live query has a significant response size that does not fit into the limits, you need one of the following:
- Paginate with offset (which is not convenient).
- Use cursor-based pagination (e.g., by an increasing unique id).
- Narrow down the request scope with filtering (e.g., by timestamp or level).
Ultimately, you can get "subscriptions" on top of live queries by requesting all the items having an ID greater than the maximum existing one, or all the items with a timestamp greater than now.
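For instance, a cursor-based live query over a hypothetical `token` table might look like this, where the client supplies the last ID it has already seen:

```graphql
subscription ($lastId: Int!) {
  token(where: {id: {_gt: $lastId}}, order_by: {id: asc}) {
    id
    level
  }
}
```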
Hasura is compatible with the subscriptions-transport-ws library, which is currently deprecated but still used by the majority of clients.
The purpose of DipDup is to create indexers, which means you can consistently reproduce the state as long as data sources are accessible. It makes your backend "stateless" in the sense that it's tolerant of data loss.
However, in some cases you might need to introduce a non-recoverable state and mix indexed and user-generated content. DipDup allows marking such UGC tables "immune", protecting them from being wiped. In addition to that, you will need to set up Hasura Auth and adjust write permissions for the tables (by default, they are read-only).
Lastly, you will need to execute GQL mutations to modify the state from the client side. Read more about how to do that with Hasura.
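A mutation for such a UGC table follows Hasura's generated naming scheme. For a hypothetical `user_note` table (assuming insert permissions have been granted), it might look like:

```graphql
mutation ($object: user_note_insert_input!) {
  insert_user_note_one(object: $object) {
    id
  }
}
```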