Squid Snowbridge project

A Squid project to index Snowbridge transfers. It accumulates transfer events from multiple chains (e.g. Ethereum, Base, Bridgehub, Assethub, Hydration) and serves them via a GraphQL API.


Prerequisites

  • node 22.x
  • docker
  • npm (note that the yarn package manager is not supported)

Quickly running the sample

Example commands below use the sqd CLI. Please install it (npm i -g @subsquid/cli) before proceeding.

# 1. Install dependencies
npm ci

# 2. Copy the env file and adjust it if necessary
cp .env.example .env

# 3. Start target Postgres database and detach
sqd up

# 4. Build the project
sqd build

# 5. Generate entities, then patch generated relations before migration
sqd codegen && sqd codegen:patch

# 6. Generate database migration
sqd migration:generate && sqd migration:apply

# 7. Start the squid processor for ethereum
sqd process:ethereum

# 8. Start the squid processor for bridgehub
sqd process:bridgehub

# 9. Start the squid processor for assethub
sqd process:assethub

# 10. Start the GraphQL API
sqd serve

A GraphiQL playground will be available at localhost:4350/graphql.

Running Full Ecosystem with Docker Compose

Use this flow to run every app from full.ecosystem.config.js as containers.

Prerequisites

Build and start all services

docker compose -f docker-compose.full.yml build
docker compose -f docker-compose.full.yml up -d

Start only selected services

docker compose -f docker-compose.full.yml up -d ethereum-v2 bridgehub-v2 assethub-v2

View logs

# all services
docker compose -f docker-compose.full.yml logs -f

# one service
docker compose -f docker-compose.full.yml logs -f mythos-v2

Stop or restart services

# stop all
docker compose -f docker-compose.full.yml down

# restart one service
docker compose -f docker-compose.full.yml restart neuroweb-v2

Validation steps

# validate compose file
docker compose -f docker-compose.full.yml config

# smoke test one processor
docker compose -f docker-compose.full.yml up ethereum-v2

# smoke test full set in background
docker compose -f docker-compose.full.yml up -d
docker compose -f docker-compose.full.yml ps

Database Backup

Use the backup script to create daily PostgreSQL snapshots and keep only the most recent 7 days.

Run backup manually

./scripts/backup-db.sh .env
  • Backup files are written to data/backups/.
  • File naming format: <DB_NAME>_YYYYMMDDTHHMMSSZ.dump.
  • By default, retention keeps the last 7 days (RETENTION_DAYS=7).
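The naming and retention rules above can be sketched as follows. This is an illustration only; the authoritative logic lives in scripts/backup-db.sh.

```typescript
// Sketch of the backup naming and retention rules described above.
// Illustrative only; the real logic is in scripts/backup-db.sh.

// Build a file name of the form <DB_NAME>_YYYYMMDDTHHMMSSZ.dump
function backupFileName(dbName: string, now: Date): string {
  const ts = now.toISOString().replace(/[-:]/g, '').replace(/\.\d{3}Z$/, 'Z')
  return `${dbName}_${ts}.dump`
}

// A backup is pruned once it is older than RETENTION_DAYS (default 7)
function isExpired(createdAt: Date, now: Date, retentionDays = 7): boolean {
  const ageMs = now.getTime() - createdAt.getTime()
  return ageMs > retentionDays * 24 * 60 * 60 * 1000
}
```

For example, backupFileName('squid', new Date('2025-01-20T07:09:47Z')) yields squid_20250120T070947Z.dump.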

Install nightly cron job

./scripts/install-backup-cron.sh .env

Default schedule is 0 2 * * * (daily at 02:00 server time).

To customize the schedule:

CRON_SCHEDULE="30 1 * * *" ./scripts/install-backup-cron.sh .env

Restore from a snapshot

pg_restore \
  --host="$DB_HOST" \
  --port="$DB_PORT" \
  --username="$DB_USER" \
  --dbname="$DB_NAME" \
  --clean --if-exists \
  data/backups/<DB_NAME>_<TIMESTAMP>.dump

Development flow

1. Define database schema

Start development by defining the schema of the target database in schema.graphql. The schema definition consists of regular GraphQL type declarations annotated with custom directives. A full description of the schema.graphql dialect is available here.
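As a hedged illustration of the dialect (the entity and field names below are invented for this example, not the project's actual schema), a minimal entity declaration looks like:

```graphql
# Illustrative entity only; see the project's schema.graphql for real definitions
type TransferExample @entity {
  id: ID!
  txHash: String! @index
  nonce: Int!
  amount: BigInt!
  timestamp: DateTime!
}
```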

2. Generate TypeORM classes

Mapping developers use TypeORM entities to interact with the target database during data processing. All necessary entity classes are generated by the squid framework from schema.graphql, by running npx squid-typeorm-codegen or (equivalently) the sqd codegen command.

After sqd codegen, run sqd codegen:patch before generating migrations. This applies project-specific relation settings to generated models.

3. Generate database migration

All database changes are applied through migration files located in db/migrations. The squid-typeorm-migration(1) tool provides several commands to drive the process; it is all TypeORM under the hood.

# Connect to database, analyze its state and generate migration to match the target schema.
# The target schema is derived from entity classes generated earlier.
# Don't forget to compile your entity classes beforehand!
npx squid-typeorm-migration generate

# Create template file for custom database changes
npx squid-typeorm-migration create

# Apply database migrations from `db/migrations`
npx squid-typeorm-migration apply

# Revert the last performed migration
npx squid-typeorm-migration revert

Available sqd shortcuts:

# Run after `sqd codegen` and before generating migrations
sqd codegen:patch

# Build the project, remove any old migrations, then run `npx squid-typeorm-migration generate`
sqd migration:generate

# Run npx squid-typeorm-migration apply
sqd migration:apply

4. Generate TypeScript definitions for chain events, calls and storage

This part is optional, but strongly advised.

Event, call and runtime storage data come to mapping handlers as raw, untyped JSON. While it is possible to work with raw JSON directly, doing so is extremely error-prone, and the JSON structure may change over time due to runtime upgrades.

The Squid framework provides a tool for generating type-safe wrappers around events, calls and runtime storage items for each historical change in the spec version. See the typegen page for the different chains.
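To illustrate the benefit, here is a hand-written sketch of what a typegen-generated wrapper does for you: validate the raw, untyped JSON once, then hand mapping code a well-typed value. The field names are illustrative, not a real Snowbridge event shape.

```typescript
// Hand-written sketch of the safety typegen-generated wrappers provide:
// validate the raw untyped JSON once, then expose typed fields.
// Field names here are illustrative, not a real Snowbridge event shape.
interface OutboundMessage {
  nonce: bigint
  amount: bigint
}

function decodeOutboundMessage(raw: unknown): OutboundMessage {
  if (typeof raw !== 'object' || raw === null) {
    throw new Error('event payload is not an object')
  }
  const r = raw as Record<string, unknown>
  if (typeof r.nonce !== 'string' || typeof r.amount !== 'string') {
    throw new Error('unexpected event shape (possible runtime upgrade?)')
  }
  return { nonce: BigInt(r.nonce), amount: BigInt(r.amount) }
}
```

Typegen automates exactly this kind of validation for every spec version, so mapping handlers never touch the raw JSON.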

Project conventions

Squid tools assume a certain project layout.

  • All compiled js files must reside in lib and all TypeScript sources in src. The layout of lib must reflect src.
  • All TypeORM classes must be exported by src/model/index.ts (lib/model module).
  • Database schema must be defined in schema.graphql.
  • Database migrations must reside in db/migrations and must be plain js files.
  • squid-*(1) executables consult .env file for a number of environment variables.

See the full description in the documentation.

Graphql server extensions

Transfer status is resolved by these two queries:

  • transferStatusToPolkadots
  • transferStatusToEthereums

It is possible to extend squid-graphql-server(1) with custom type-graphql resolvers and to add request validation. For more details, consult docs.
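Conceptually, such an extension could merge the two directional queries into a single status lookup. The sketch below shows the plain lookup logic only; an actual extension would wrap it in a type-graphql resolver class registered with squid-graphql-server, and the TransferStatus shape and findStatus name here are hypothetical.

```typescript
// Plain-TypeScript sketch of a combined status lookup; a real server
// extension would wrap this logic in a type-graphql resolver class.
// The TransferStatus shape and findStatus name are hypothetical.
interface TransferStatus {
  messageId: string
  status: number // 0: pending, 1: completed, 2: failed
}

function findStatus(
  toPolkadot: TransferStatus[],
  toEthereum: TransferStatus[],
  messageId: string
): TransferStatus | undefined {
  return (
    toPolkadot.find(t => t.messageId === messageId) ??
    toEthereum.find(t => t.messageId === messageId)
  )
}
```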

Deploy to subsquid cloud

First, log in with your API key:

sqd auth -k YOUR_API_TOKEN

Then deploy to the cloud:

sqd deploy --org snowbridge .

How to use the API

UIs or third-party teams can query Snowbridge transfers from this indexer. Explore https://snowbridge.squids.live/snowbridge-subsquid-polkadot@v1/api/graphql for the queries we support.

For ease of use, all data is aggregated into two queries: transferStatusToEthereumV2s for transfers to Ethereum and transferStatusToPolkadotV2s for the opposite direction. A demo script for reference:

./scripts/query-transfers.sh

and the result is something like:

"transferStatusToPolkadotV2s": [
      {
        "txHash": "0x53597b6f98334a160f26182398ec3e7368be8ca7aea3eea41d288046f3a1999d",
        "status": 1, // 0:pending, 1: completed 2: failed
        "channelId": "0xc173fac324158e77fb5840738a1a541f633cbec8884c6a601c567d2b376a0539",
        "destinationAddress": "0x628119c736c0e8ff28bd2f42920a4682bd6feb7b000000000000000000000000",
        "messageId": "0x00d720d39256bab74c0be362005b9a50951a0909e6dabda588a5d319bfbedb65",
        "nonce": 561,
        "senderAddress": "0x628119c736c0e8ff28bd2f42920a4682bd6feb7b",
        "timestamp": "2025-01-20T07:09:47.000000Z",
        "tokenAddress": "0xba41ddf06b7ffd89d1267b5a93bfef2424eb2003",
        "amount": "68554000000000000000000"
      },
      ...
],
"transferStatusToEthereumV2s": [
      {
        "txHash": "0xb57627dbcc89be3bdaf465676fced56eeb32d95855db003f1e911aa4c3769059",
        "status": 1, // 0:pending, 1: completed 2: failed
        "channelId": "0xc173fac324158e77fb5840738a1a541f633cbec8884c6a601c567d2b376a0539",
        "destinationAddress": "0x2a9b5c906c6cac92dc624ec0fa6c3b4c9f2e7cc2",
        "messageId": "0x95c52ffe4f976c99bcfe8d76f6011e62b7f215ada834e8c0bcf6538b31b1bf87",
        "nonce": 152,
        "senderAddress": "0x4a79eee26f5dab7c230f7f2c8657cb541a4b8e391c8357f5eb51413f249ddc13",
        "timestamp": "2025-01-20T04:10:48.000000Z",
        "tokenAddress": "0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2",
        "amount": "8133242931806029953"
      },
      ...
]
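Consuming the API from TypeScript can be sketched as below. The endpoint is the one documented above; the limit and orderBy arguments are assumptions based on standard squid GraphQL server conventions, and the status labels follow the codes in the sample response.

```typescript
// Sketch of querying the indexer from TypeScript (node 22+ has global fetch).
// limit/orderBy arguments assume standard squid GraphQL server conventions.
const ENDPOINT =
  'https://snowbridge.squids.live/snowbridge-subsquid-polkadot@v1/api/graphql'

const QUERY = `{
  transferStatusToPolkadotV2s(limit: 5, orderBy: timestamp_DESC) {
    txHash
    status
    messageId
    amount
  }
}`

// Status codes as documented in the sample response above
const STATUS_LABELS = ['pending', 'completed', 'failed'] as const

function statusLabel(code: number): string {
  return STATUS_LABELS[code] ?? 'unknown'
}

async function fetchTransfers(): Promise<unknown> {
  const res = await fetch(ENDPOINT, {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ query: QUERY }),
  })
  return res.json()
}
```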

Index Recovery

Data snapshotting and multi-region database replication are not available on general cloud user plans.

Ideally, we should periodically create database snapshots. If the database crashes, we can retrieve a checkpoint block number from several days back in each chain's history (7 days by default) by running the following script:

./scripts/fetch-checkpoint.js

Then write the retrieved checkpoint into the environment section for each chain in squid.yaml.
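For example, a checkpoint could be recorded like this. This is a hedged sketch: the layout follows the standard squid.yaml deploy manifest, and the processor name and environment variable are hypothetical.

```yaml
# Illustrative fragment only; the processor name and env variable are examples
deploy:
  processor:
    - name: ethereum-processor
      env:
        ETHEREUM_CHECKPOINT_BLOCK: "21000000"
```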
