A Squid project to process Substrate pallet-contracts data and serve it through GraphQL with a dedicated model tailored for contracts.
It has been extended with capabilities to configure and spin up a new processor for different chains in the same project. Note that this differs from running a multi-chain processor as defined by Subsquid: Squid-ink stores the data model for each chain in a separate database, while the Subsquid multi-chain processor stores the data of all chains in a single database.
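The one-database-per-chain idea can be sketched as each processor deriving its own connection settings from the chain it serves. The helper and field names below are illustrative assumptions, not the project's actual code:

```typescript
// Sketch: each chain's processor writes to its own database, unlike
// Subsquid's multi-chain processor, which keeps all chains in one database.
// The naming scheme here is an assumption for illustration.
interface DbSettings {
  host: string
  port: number
  name: string
}

function dbSettingsFor(chain: string, host = "localhost", port = 5432): DbSettings {
  return {
    host,
    port,
    // one database per chain
    name: `squid_${chain}`,
  }
}
```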
Squid-ink is using the latest Fire Squid version of Subsquid.
- node 16.x
- docker
You can find the published Docker images at the Epirus Substrate registry.
Example `docker-compose.yml` for Rococo Canvas:
```yaml
version: "3.5"

x-environment: &envs
  CHAIN: rococo
  PROCESSOR_NAME: squid-rococo
  DB_NAME: squid
  DB_HOST: db
  DB_PASS: squid
  DB_PORT: 5432
  PROCESSOR_PROMETHEUS_PORT: 3000
  GQL_PORT: 4000
  ARCHIVE_ENDPOINT: https://SQUID_ARCHIVE/graphql
  WS_ENDPOINT: wss://NODE_WSS_ENDPOINT
  BALANCES_STORE: system

services:
  squid-db:
    container_name: squid-db
    image: postgres:15
    restart: on-failure
    volumes:
      - /var/lib/postgresql/data
    environment:
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASS}
      POSTGRES_DB: ${DB_NAME}
    ports:
      - "${DB_PORT}:5432"

  processor:
    container_name: processor
    image: squid-local:0.0.15
    environment: *envs
    command: sh -c "(npx squid-typeorm-migration generate); sleep 5 && npx sqd process"
    depends_on:
      - squid-db

  query:
    container_name: query
    image: squid-local:0.0.15
    restart: on-failure
    environment: *envs
    ports:
      - "4351:4350"
      - "5005:5000"
    depends_on:
      - processor
      - squid-db
```
Before running the service, you need to create the initial database model. At the moment, the migrations are neither pushed to the git repository nor included in the image.
Start the squid DB:
```shell
sqd up
```
Generate the DB schema (if it hasn't been created before):
```shell
sqd migration:generate
```
Apply the DB schema:
```shell
sqd migration:apply
```
To run the unit tests, use the command
```shell
npm test
```
To generate a test coverage report, execute
```shell
npm run test:coverage
```
Install Squid CLI:
```shell
npm i -g @subsquid/cli@latest
```
or
```shell
brew tap subsquid/cli
brew install sqd
```
Generate data models:
```shell
sqd codegen
```
The Squid processor extracts block, extrinsic and event data from a Squid Archive to perform transformation and storage. As such, a Squid Archive endpoint is always required when running the processor. Subsquid provides Archive endpoints for a myriad of Substrate chains and parachains which can be found in the archive-registry. If the archive registry does not contain endpoints for the chain to index, the Squid Archive can be run locally.
The `./archive` folder contains an example `docker-compose.yml` file for running a Squid Archive. Multiple node websocket endpoints can be specified in the command section of the ingest service to speed up processing.

To run the archive locally:
```shell
docker compose -f archive/docker-compose.yml up
```
Inspect your archive at http://localhost:8888. Run the processor with
```typescript
processor.setDataSource({
  archive: `http://localhost:8888/graphql`,
  chain: // your network endpoint here
});
```
To drop the archive, run:
```shell
docker compose -f archive/docker-compose.yml down -v
```
The following steps are required when adding a new chain to the project:
Update the `typegen.json` file, setting the `outDir`, `specVersions` and `typesBundle` fields.
```jsonc
{
  "outDir": "src/chains/<your-chain-name>/types", // the directory where the type-safe wrappers are stored
  "specVersions": "http://localhost:8888/graphql", // Fire Squid archive endpoint
  "typesBundle": "<yourTypesBundle>.json", // see types bundle section below
  "events": [ // list of events to generate
    "Balances.Transfer"
  ],
  "calls": [ // list of calls to generate
    "Contracts.call"
  ],
  "storage": [
    "System.Account" // list of storage items; to generate wrappers for all storage items, set "storage": true
  ]
}
```
Run the `squid-substrate-typegen` command (the Fire Squid archive needs to be running):
```shell
npx squid-substrate-typegen typegen.json
```
Since each chain has different spec versions and, possibly, different types, the wrappers generated in the step above are specific to the chain. The calls, events and storage types need to be normalised so that the handlers in the processor can run chain-agnostically. An example of how type normalisation should be done can be found in the `src/chains/local` folder.
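To illustrate the normalisation idea with hypothetical shapes (the real ones come from the typegen wrappers generated in the previous step): an older runtime version might emit `Balances.Transfer` as a positional tuple while a newer one emits a named struct, and the handlers want a single shape regardless of chain or version:

```typescript
// Hypothetical versioned shapes of a Balances.Transfer event; in the real
// project these come from the generated type-safe wrappers.
type TransferV1 = [from: Uint8Array, to: Uint8Array, amount: bigint]
interface TransferV2 {
  from: Uint8Array
  to: Uint8Array
  amount: bigint
}

// The normalised shape the processor handlers work with.
interface NormalisedTransfer {
  from: Uint8Array
  to: Uint8Array
  amount: bigint
}

function normaliseTransfer(event: TransferV1 | TransferV2): NormalisedTransfer {
  if (Array.isArray(event)) {
    // positional tuple from the older runtime
    const [from, to, amount] = event
    return { from, to, amount }
  }
  // named struct from the newer runtime already matches the target shape
  return event
}
```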
Add the properties of your chain to the `chainConfig` object in `chain-config.ts`.
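As a sketch of what such an entry might look like (the field names below are assumptions for illustration; the authoritative shape is whatever `chain-config.ts` defines):

```typescript
// Illustrative per-chain configuration; field names are assumptions.
interface ChainProperties {
  name: string
  ss58Prefix: number
  token: { symbol: string; decimals: number }
}

const chainConfig: Record<string, ChainProperties> = {
  rococo: {
    name: "rococo",
    ss58Prefix: 42,
    token: { symbol: "ROC", decimals: 12 },
  },
  shibuya: {
    name: "shibuya",
    ss58Prefix: 5,
    token: { symbol: "SBY", decimals: 18 },
  },
}

// A processor would then select its entry by chain name, e.g. from the
// CHAIN environment variable: chainConfig["rococo"]
```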
A `.env` file is required per chain. It is recommended to name it `.env.<chain-name>`. The `.env` file serves as an example.
- Install dependencies and build the project
  ```shell
  npm ci
  ```
- Build squid files
  ```shell
  sqd build
  ```
- Start squid DB
  ```shell
  sqd up
  ```
- Generate DB schema (if it hasn't been created before)
  ```shell
  sqd migration:generate
  ```
- Start the squid processor (should begin to ingest blocks)
  ```shell
  sqd process
  ```
- To start the GraphQL server, open a separate terminal and run
  ```shell
  sqd serve
  ```
Substrate chains whose blocks have metadata versions below 14 don't provide enough information to decode their data. For those chains, external type definitions are required.
Type definitions (`typesBundle`) can be given to squid tools in two forms:
- as a name of a known chain (currently only `kusama`)
- as a JSON file of the structure described below
```jsonc
{
  "types": {
    "AccountId": "[u8; 32]"
  },
  "typesAlias": {
    "assets": {
      "Balance": "u64"
    }
  },
  "versions": [
    {
      "minmax": [0, 1000], // block range with inclusive boundaries
      "types": {
        "AccountId": "[u8; 16]"
      },
      "typesAlias": {
        "assets": {
          "Balance": "u32"
        }
      }
    }
  ]
}
```
- `types` - SCALE type definitions, similar to polkadot.js types
- `typesAlias` - similar to polkadot.js type aliases
- `versions` - per-block range overrides/patches for the above fields
All fields in the types bundle are optional and are applied on top of a fixed set of well-known frame types.
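To make the semantics of `versions` concrete, here is a small sketch (not part of the squid tools) of how a version patch would take precedence over the base `types` for blocks inside its inclusive `minmax` range:

```typescript
// Minimal model of a types bundle, mirroring the JSON example above.
interface VersionPatch {
  minmax: [number, number] // inclusive block range
  types?: Record<string, string>
}

interface TypesBundle {
  types?: Record<string, string>
  versions?: VersionPatch[]
}

// Resolve the definition of a type name at a given block height:
// start from the base types and let any matching version patch override it.
function typeAt(bundle: TypesBundle, name: string, block: number): string | undefined {
  let result = bundle.types?.[name]
  for (const patch of bundle.versions ?? []) {
    const [lo, hi] = patch.minmax
    if (block >= lo && block <= hi && patch.types?.[name] !== undefined) {
      result = patch.types[name]
    }
  }
  return result
}

const bundle: TypesBundle = {
  types: { AccountId: "[u8; 32]" },
  versions: [{ minmax: [0, 1000], types: { AccountId: "[u8; 16]" } }],
}
```

With this bundle, `typeAt(bundle, "AccountId", 500)` resolves to the patched `"[u8; 16]"`, while any block above 1000 falls back to the base `"[u8; 32]"`.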