Manage the configurations for your ComposeDB server.
When you start the daemon using the `ceramic daemon` command, if a configuration file is not present at the expected path `$HOME/.ceramic/daemon.config.json`, the command will create a new `daemon.config.json` file with the following defaults:

```json
"ethereum-rpc-url": "https://eg_infura_endpoint", // Replace with an Ethereum RPC endpoint to avoid rate limiting
"log-level": 2, // 0 is most verbose
"name": "mainnet", // Connect to mainnet, testnet-clay, or dev-unstable
"mode": "fs", // volume storage option shared here, can be replaced by S3 mode & bucket
"local-directory": "/path_for_ceramic_statestore", // Defaults to $HOME/.ceramic/statestore
```
These are the configurations you should pay close attention to, described below on this page:
- Networks & Environments
- SQL Database
- History Sync
- IPFS Process
ComposeDB configurations can be set in two places: using the config file and using the CLI. Although we recommend making changes using the config file, for completeness this guide demonstrates both.
Using the daemon.config.json file (recommended)
The config file is a JSON file used to set durable, long-lived node configurations. After making changes to the config file, be sure to save your changes then restart the daemon for them to take effect.
This is the preferred method for setting configs, especially for stable production usage.
Using the CLI
The CLI can be used to set temporary, short-lived node configurations. To do this, pass designated CLI flags to the daemon at startup. This method is only recommended in a scripted test environment or when starting the daemon in a singleton way for test purposes.
When using the CLI, pass the same flags every time the node restarts, or any omitted settings will revert to their defaults.
Networks & Environments
Networks are collections of nodes that communicate, store data, and share data. When running a ComposeDB server, you need to decide which network it will connect to. Each network has its own string designation. Find more information about the networks here.
|Network|Description|
|---|---|
|mainnet|Primary public production network|
|testnet-clay|Primary public test network|
|dev-unstable|Core protocol debugging network, very experimental|
|inmemory|Local instance for development|
Networks are completely isolated, distinct development environments. Models and data that exist on one network do not exist on other networks, and are not portable.
Setting the network
The system will default to testnet-clay if a network is not set.
Using the CLI:

```shell
# Connect to testnet-clay network on startup
ceramic daemon --network "testnet-clay"
```
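The durable, config-file equivalent sets the network name in daemon.config.json. A minimal sketch, assuming the `network` section name used by the js-ceramic daemon config:

```json
"network": {
  "name": "testnet-clay"
}
```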
To switch from one network to another, such as from testnet-clay to mainnet, restart the daemon with the new network designation.

Using the CLI:

```shell
ceramic daemon --network "mainnet"
```
Be mindful that models and data are not portable across networks.
If you want to switch networks locally, you need to either drop or move your default database. To prevent data loss, the preferred approach is to move/rename the database rather than drop it.

- Stop your node (the ceramic daemon)
- Depending on your default database configuration, execute the following commands:

SQLite:

```shell
mv ~/.ceramic/indexing.sqlite ~/.ceramic/indexing.sqlite.NETWORK
```

Postgres (in a psql session):

```sql
ALTER DATABASE ceramic RENAME TO ceramic_NETWORK;
\q
```

- Restart your ceramic daemon with the newly desired network config, and ComposeDB will set up the new default environment automatically
To switch back between networks, simply follow the above steps again and restore the desired backup to the default values:

- SQLite default DB file: ~/.ceramic/indexing.sqlite
- Postgres default DB name: ceramic
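The SQLite rename dance above can be sketched end to end. This demo uses a scratch directory standing in for `~/.ceramic`, with `testnet-clay` as the network suffix:

```shell
# Scratch directory standing in for ~/.ceramic
CERAMIC_DIR=$(mktemp -d)
touch "$CERAMIC_DIR/indexing.sqlite"   # pretend this is the testnet-clay DB

# Stop the daemon, then set the old network's DB aside:
mv "$CERAMIC_DIR/indexing.sqlite" "$CERAMIC_DIR/indexing.sqlite.testnet-clay"

# ...restart the daemon on the new network; a fresh indexing.sqlite is created...

# To switch back later, restore the backup to the default name:
mv "$CERAMIC_DIR/indexing.sqlite.testnet-clay" "$CERAMIC_DIR/indexing.sqlite"
ls "$CERAMIC_DIR"   # → indexing.sqlite
```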
SQL Database
One of the most important configurations that you must set up is your database. This database will be used to store an index of data for all models used by your app.
Available SQL databases
|Database|Description|
|---|---|
|Postgres|Recommended for everything besides early prototyping|
|SQLite|Default option; can only be run locally; recommended for early prototyping|
Only Postgres is currently supported for production usage.
History Sync
By default, Ceramic nodes will only index documents they observe via pubsub messages. To index documents created before the node was deployed, or before it was configured to index certain models, History Sync needs to be enabled on the Ceramic node in its config file.
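In the config file this is a single flag. A minimal sketch, assuming the `indexing.enable-historical-sync` key name from the js-ceramic daemon config:

```json
"indexing": {
  "enable-historical-sync": true
}
```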
IPFS Process
|Mode|Description|
|---|---|
|bundled|IPFS running in the same compute process as Ceramic; recommended for early prototyping|
|remote|IPFS running in a separate compute process; recommended for production and everything besides early prototyping|
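Selecting a mode is done in the `ipfs` section of the config file. A minimal sketch of running against a separate IPFS process, assuming the `ipfs.mode` and `ipfs.host` key names from the js-ceramic daemon config (the host URL is illustrative):

```json
"ipfs": {
  "mode": "remote",
  "host": "http://localhost:5001"
}
```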
To run a Ceramic node in production, it is critical to persist the Ceramic state store and the IPFS datastore. The form of storage you choose should also be configured for disaster recovery with data redundancy, and some form of snapshotting and/or backups.
Loss of this data can result in permanent loss of Ceramic streams and will cause your node to be in a corrupt state.
The Ceramic state store and IPFS datastore are stored on your machine's filesystem by default. The Ceramic state store defaults to $HOME/.ceramic/statestore. The IPFS datastore defaults to ipfs/blocks, located wherever you run IPFS.
The fastest way to ensure data persistence is by mounting a persistent volume to your instances and configuring the Ceramic and IPFS nodes to write to the mount location. The mounted volume should be configured such that the data persists if the instance shuts down.
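Pointing the state store at a mounted volume reuses the `mode` and `local-directory` keys from the defaults above. The mount path is illustrative, and the `state-store` section name is an assumption to verify against your generated config:

```json
"state-store": {
  "mode": "fs",
  "local-directory": "/mnt/ceramic-data/statestore"
}
```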
You can also use AWS S3 for data storage which is supported for both Ceramic and IPFS. Examples of the configuration are shared on the Ceramic docs here.
The IPFS datastore stores the raw IPFS blocks that make up Ceramic streams. To prevent data corruption, use environment variables written to your profile file, or otherwise injected into your environment on start so that the datastore location does not change between reboots.
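One way to keep the datastore location stable is the standard `IPFS_PATH` environment variable, set in your shell profile so every restart uses the same directory (the mount path is illustrative):

```shell
# Add to your shell profile (e.g. ~/.profile) so the IPFS datastore
# location survives reboots instead of changing between runs.
export IPFS_PATH=/mnt/ceramic-data/ipfs
echo "$IPFS_PATH"   # → /mnt/ceramic-data/ipfs
```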
Note: Switching between data storage locations is an advanced feature and should be avoided. Depending on the sharding implementation you may need to do a data migration first. See the datastore spec for more information.
Ceramic State Store
The Ceramic State Store holds state for pinned streams and acts as a cache for the Ceramic streams that your node creates or loads. To ensure that the data you create with your Ceramic node does not get lost, you must pin the streams you care about and ensure that the state store does not get deleted.
Metrics
Metrics are a critical part of running a production Ceramic node. They allow you to monitor the health of your node and the network, and to debug issues when they arise.
Js-ceramic produces metrics in the Prometheus format. You can configure your Ceramic node to expose these metrics on an HTTP endpoint, which can then be scraped by a Prometheus server. Alternatively, you can configure the Ceramic node to send metrics to an OpenTelemetry collector endpoint.
Prometheus endpoint
In the metrics section of the daemon config, set the prometheus-exporter-enabled field to true and add a port number:

```json
"metrics": {
  "prometheus-exporter-enabled": true,
  "prometheus-exporter-port": 9464 // or whatever port you want to use
}
```
OpenTelemetry collector endpoint
In the metrics section of the daemon config, set the metrics-exporter-enabled field to true and add a collector host endpoint:
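A minimal sketch of that fragment, assuming the `collector-host` key name from the js-ceramic daemon config (the host value is illustrative):

```json
"metrics": {
  "metrics-exporter-enabled": true,
  "collector-host": "localhost"
}
```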
Depending on the version of js-ceramic, environment variables may be available to set metrics options. See the js-ceramic docs for more information.