DipDup supports several database engines for development and production. The required kind field specifies which engine to use:

  • sqlite
  • postgres (and compatible engines)

The Database engines article may help you choose a database that better suits your needs.


SQLite

The path field must be either a path to the .sqlite3 file or :memory: to keep the database in memory only (the default):

  kind: sqlite
  path: db.sqlite3
  • kind: always 'sqlite'
  • path: path to the .sqlite3 file; leave the default for an in-memory database
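Since :memory: is the default, omitting path keeps the database in memory; it can also be set explicitly (an illustrative sketch; all data is lost when the process exits):

  kind: sqlite
  path: ":memory:"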


PostgreSQL

This engine requires the host, port, user, password, and database fields. You can set schema_name to a value other than public, but Hasura integration won't be available in that case.

  kind: postgres
  host: db
  port: 5432
  user: dipdup
  password: ${POSTGRES_PASSWORD:-changeme}
  database: dipdup
  schema_name: public
  • kind: always 'postgres'
  • database: database name
  • schema_name: schema name
  • immune_tables: list of tables to preserve during reindexing
  • connection_timeout: connection timeout in seconds
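A fuller configuration combining the optional fields above might look like this (the values are illustrative, not recommendations):

  kind: postgres
  host: db
  port: 5432
  user: dipdup
  password: ${POSTGRES_PASSWORD:-changeme}
  database: dipdup
  connection_timeout: 60
  immune_tables:
    - token_metadata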

You can also use compose-style environment variable substitutions with default values for secrets and other fields. See Templates and variables for details.
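The ${VAR:-default} syntax follows compose-style (POSIX shell) parameter expansion: the default after :- is used when the variable is unset or empty. A minimal Python sketch of this behavior (an illustrative emulation, not DipDup's actual implementation):

```python
import os
import re

def substitute(value: str) -> str:
    # Expand ${VAR:-default} placeholders: use the environment variable
    # if set and non-empty, otherwise fall back to the default.
    pattern = re.compile(r"\$\{(\w+):-([^}]*)\}")
    return pattern.sub(lambda m: os.environ.get(m.group(1)) or m.group(2), value)

os.environ.pop("POSTGRES_PASSWORD", None)
print(substitute("${POSTGRES_PASSWORD:-changeme}"))  # changeme

os.environ["POSTGRES_PASSWORD"] = "s3cret"
print(substitute("${POSTGRES_PASSWORD:-changeme}"))  # s3cret
```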

Immune tables

In some cases, DipDup can't continue indexing with an existing database. See 5.3. Reindexing for details. One way to resolve the reindexing state is to drop the database and start indexing from scratch. To do so, either invoke the schema wipe command or set the action to wipe in the advanced.reindex config section.

You might want to keep several tables during a schema wipe if the data in them does not depend on index states yet is heavy to recompute. A typical example is indexing IPFS data: rollbacks do not affect off-chain storage, so you can safely continue indexing after receiving a reorg message.

  immune_tables:
    - token_metadata
    - contract_metadata

immune_tables is an optional array of table names that will be ignored during a schema wipe. Once an immune table is created, DipDup will never touch it again; to change the schema of an immune table, you need to perform the migration manually. Check the schema export output before doing this to ensure the resulting schema matches the one Tortoise ORM would generate.
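Since DipDup never alters immune tables, schema changes are applied by hand. A minimal sketch using Python's built-in sqlite3 module, assuming a hypothetical token_metadata table and column (not DipDup's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE token_metadata (id INTEGER PRIMARY KEY, metadata TEXT)")

# Manual migration: DipDup won't touch this table, so we add the column ourselves.
conn.execute("ALTER TABLE token_metadata ADD COLUMN updated_at TEXT")

# Inspect the resulting schema to compare against `schema export` output.
columns = [row[1] for row in conn.execute("PRAGMA table_info(token_metadata)")]
print(columns)  # ['id', 'metadata', 'updated_at']
```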