Upgrading from 5.1 to 5.2
This section describes how to upgrade a Corda cluster from 5.1 to 5.2. It lists the required prerequisites and describes the following steps required to perform an upgrade:
- Back Up the Corda Database
- Test the Migration
- Scale Down the Running Corda Worker Instances
- Migrate the Corda Cluster Database
- Migrate State Manager Databases
- Managing 5.2 Multi-Database Support
- Setting Search Paths
- Migrate the Virtual Node Databases
- Update Kafka Topics
- Launch the Corda 5.2 Workers
- Upload the Corda 5.2 CPIs to virtual nodes
For information about how to roll back an upgrade, see Rolling Back.
Following a platform upgrade, Network Operators should upgrade their networks. For more information, see Upgrading an Application Network.
Prerequisites
This documentation assumes you have full administrator access to the Corda CLI 5.2 and Kafka. You must ensure that you can create a connection to your Kafka deployment. You can check this by confirming that you can list Kafka topics, for example:
kafka-topics --bootstrap-server=prereqs-kafka.test-namespace:9092 --list
Back Up the Corda Database
You must create a backup of all schemas in your database:
- Cluster — name determined at bootstrap. For example, `CONFIG`.
- Crypto — name determined at bootstrap. For example, `CRYPTO`.
- RBAC — name determined at bootstrap. For example, `RBAC`.
- Virtual node schemas:
  - `vnode_crypto_<holding_id>`
  - `vnode_uniq_<holding_id>`
  - `vnode_vault_<holding_id>`
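As a sketch of this stage (assuming PostgreSQL and `pg_dump`; the schema names are the defaults used in this document's examples), the per-schema backups can be scripted. `PG_DUMP` defaults to an echo preview so the commands are printed rather than executed; set `PG_DUMP=pg_dump` to run them for real:

```shell
# Sketch of the backup stage, assuming PostgreSQL and pg_dump. PG_DUMP defaults
# to "echo pg_dump" so the commands are only printed; set PG_DUMP=pg_dump to run.
PG_DUMP="${PG_DUMP:-echo pg_dump}"

backup_schema() {  # $1 = schema name (or pattern)
  $PG_DUMP -h localhost -p 5432 -U postgres -d cordacluster -n "$1" -F c -f "backup_$1.dump"
}

# Cluster schemas (default names; use the names chosen at bootstrap if they differ)
for s in config crypto rbac; do
  backup_schema "$s"
done
# Virtual node schemas share the vnode_ prefix, so one pattern covers all of them
backup_schema 'vnode_*'
```

The host, port, and database name are taken from the `psql` examples later in this document; adjust them to match your deployment.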
Test the Migration
Follow the steps in Migrate the Corda Cluster Database and Migrate State Manager Databases on copies of your database backups to ensure that the database migration stages are successful before proceeding with an upgrade of a production instance of Corda.
This reveals any issues with migrating the data before incurring any downtime. It will also indicate the length of downtime required to perform a real upgrade, allowing you to schedule accordingly.
For information about rolling back the Corda 5.1 to Corda 5.2 upgrade process, see Rolling Back.
Scale Down the Running Corda Worker Instances
You can scale down the workers using any tool of your choice. For example, run the following commands if using `kubectl`:
kubectl scale --replicas=0 deployment/corda-crypto-worker -n <corda_namespace>
kubectl scale --replicas=0 deployment/corda-db-worker -n <corda_namespace>
kubectl scale --replicas=0 deployment/corda-flow-mapper-worker -n <corda_namespace>
kubectl scale --replicas=0 deployment/corda-flow-worker -n <corda_namespace>
kubectl scale --replicas=0 deployment/corda-membership-worker -n <corda_namespace>
kubectl scale --replicas=0 deployment/corda-p2p-gateway-worker -n <corda_namespace>
kubectl scale --replicas=0 deployment/corda-p2p-link-manager-worker -n <corda_namespace>
kubectl scale --replicas=0 deployment/corda-persistence-worker -n <corda_namespace>
kubectl scale --replicas=0 deployment/corda-rest-worker -n <corda_namespace>
kubectl scale --replicas=0 deployment/corda-token-selection-worker -n <corda_namespace>
kubectl scale --replicas=0 deployment/corda-uniqueness-worker -n <corda_namespace>
kubectl scale --replicas=0 deployment/corda-verification-worker -n <corda_namespace>
If you are scripting these commands, you can wait for the workers to be scaled down using something similar to the following:
while [ "$(kubectl get pods --field-selector=status.phase=Running -n <corda_namespace> | grep worker | wc -l | tr -d ' ')" != 0 ]
do
sleep 1
done
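If you prefer to script the scale-down, the twelve deployments above can be driven from a single list. This is a sketch: `KUBECTL` defaults to an echo preview so the commands are only printed, and the namespace default is a placeholder you must replace:

```shell
# Sketch: drive the scale-down from a single worker list instead of twelve
# hand-typed commands. KUBECTL defaults to "echo kubectl" so the loop only
# prints the commands; set KUBECTL=kubectl and CORDA_NAMESPACE to execute.
KUBECTL="${KUBECTL:-echo kubectl}"
CORDA_NAMESPACE="${CORDA_NAMESPACE:-corda}"
WORKERS="crypto db flow-mapper flow membership p2p-gateway p2p-link-manager persistence rest token-selection uniqueness verification"

scale_down_workers() {
  for w in $WORKERS; do
    $KUBECTL scale --replicas=0 "deployment/corda-${w}-worker" -n "$CORDA_NAMESPACE"
  done
}

scale_down_workers
```

Combined with the wait loop above, this gives a repeatable scale-down step for the upgrade runbook.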
Migrate the Corda Cluster Database
To migrate the cluster database schemas, do the following:
Generate the required SQL scripts using the `spec` sub-command of the Corda CLI `database` command. For example:

corda-cli.sh database spec -c -l ./sql_updates -g="" --jdbc-url=<DATABASE-URL> -u postgres
corda-cli.cmd database spec -c -l ./sql_updates -g="" --jdbc-url=<DATABASE-URL> -u postgres

This example generates the schemas into a directory named `sql_updates`, but you can choose any output directory. The `database spec` command generates all but state manager SQL by default. This example does not specify any overrides, so the default schema names are used. `-g=""` generates schema-aware SQL.

Verify the generated SQL scripts and apply them to the Postgres database. For example:

psql -h localhost -p 5432 -f ./sql_updates/config.sql -d cordacluster -U postgres
psql -h localhost -p 5432 -f ./sql_updates/crypto.sql -d cordacluster -U postgres
psql -h localhost -p 5432 -f ./sql_updates/rbac.sql -d cordacluster -U postgres
Grant the necessary permissions to the following users:
- `cluster` user — set up by the Helm chart, in Corda 5.1 from the property `db.cluster.username.value`; `corda` in this example. In Corda 5.2, this property does not exist and the value is set elsewhere in the configuration. More information about this is provided later in this document.
- `rbac` user — set up by the Helm chart from the property `db.rbac.username.value`; `rbac_user` in this example.
- `crypto` user — set up by the Helm chart from the property `db.crypto.username.value`; `crypto_user` in this example.

For example:

psql -h localhost -c "GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA config TO corda" -p 5432 -d cordacluster -U postgres
psql -h localhost -c "GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA config TO corda" -p 5432 -d cordacluster -U postgres
psql -h localhost -c "GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA rbac TO rbac_user" -p 5432 -d cordacluster -U postgres
psql -h localhost -c "GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA crypto TO crypto_user" -p 5432 -d cordacluster -U postgres
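If you are applying the grants by script, a small helper keeps the schema and user pairs in one place so none are missed. This is a sketch: `PSQL` defaults to an echo preview of the commands, and the user names are the examples from above:

```shell
# Sketch: apply the DML grants from one helper so the schema/user pairs stay
# consistent. PSQL defaults to an echo preview; point it at real psql to execute.
PSQL="${PSQL:-echo psql -h localhost -p 5432 -d cordacluster -U postgres}"

grant_dml() {  # $1 = schema, $2 = user
  $PSQL -c "GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA $1 TO $2"
}

grant_dml config corda
$PSQL -c "GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA config TO corda"
grant_dml rbac rbac_user
grant_dml crypto crypto_user
```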
Migrate State Manager Databases
To migrate the state manager database schemas, do the following:
Generate the required SQL scripts using the `spec` sub-command of the Corda CLI `database` command. For example:

corda-cli.sh database spec -c -l ./sql_updates -s="statemanager" -g="statemanager:state_manager" --jdbc-url=<DATABASE-URL> -u postgres
corda-cli.cmd database spec -c -l ./sql_updates -s="statemanager" -g="statemanager:state_manager" --jdbc-url=<DATABASE-URL> -u postgres

The name `state_manager` is always used for the state manager schema in every 5.1 deployment.

Verify the generated SQL scripts and apply them to the Postgres database. For example:

psql -h localhost -p 5432 -f ./sql_updates/statemanager.sql -d cordacluster -U postgres
Grant the necessary permissions. The database role username for the state manager in Corda 5.2 is specified by the property `workers.<worker_type>.stateManager.db.username` in the Helm chart. If you do not set a value, Corda uses the `db.cluster.username.value` value. This is the same behavior as Corda 5.1. For example:

psql -h localhost -c "GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA state_manager TO corda" -p 5432 -d cordacluster -U postgres
Managing 5.2 Multi-Database Support
Corda 5.2 introduces support for multiple state manager databases, partitioned by the type of state. Corda 5.1 supported only a single state manager database in which all states were stored. There is no migration path that splits existing states between new databases, so Corda deployments that were running 5.1 must continue to use a single database for all states.
However, to keep cluster and state manager database configuration aligned, cluster database configuration was modified slightly. As a result, as part of the migration to 5.2, you must add new values in the YAML provided to the Helm chart. The 5.1 YAML schema also contains values that are no longer compatible and must be removed. To update your Helm chart YAML, do the following:
Remove the following sections, including the top-level label:
db:                    <-- remove, including this line
  ...

bootstrap:
  db:
    cluster:           <-- remove, including this line
      ...
Add the following sections, updating the examples with the equivalent values from the Corda 5.1 deployment being upgraded:
databases:
  - id: "default"
    name: "cordacluster"
    port: 5432
    type: "postgresql"
    host: "prereqs-postgres"
  - id: "state-manager"
    name: "cordacluster"
    port: 5432
    type: "postgresql"
    host: "prereqs-postgres"

bootstrap:
  ...
  db:
    databases:         <-- this is new; it references the db in the snippet above
      - id: "default"
        username:
          value: "postgres"
Add the following lines to map the multi-database support back to the 5.1 mono-database. 5.1 used a fixed schema name of `state_manager` for the state manager database, which is hardcoded below in the partition field.

stateManager:
  flowCheckpoint:
    type: Database
    storageId: "state-manager"   <-- this references the db in the snippet above
    partition: "state_manager"
  flowMapping:
    type: Database
    storageId: "state-manager"
    partition: "state_manager"
  flowStatus:
    type: Database
    storageId: "state-manager"
    partition: "state_manager"
  keyRotation:
    type: Database
    storageId: "state-manager"
    partition: "state_manager"
  p2pSession:
    type: Database
    storageId: "state-manager"
    partition: "state_manager"
  tokenPoolCache:
    type: Database
    storageId: "state-manager"
    partition: "state_manager"
Configure each worker with individual user access to the state manager database. Corda only reads this from a secret, so you must create that secret with the 5.1 username in it, as the field `corda-username`. For example:

kubectl create secret generic -n <corda_namespace> prereqs-postgres-user --from-literal=corda-username=corda
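If you are scripting the upgrade, a quick check that the secret holds the expected 5.1 username can save a failed worker start later. This is a sketch: `KUBECTL` defaults to an echo preview so the command is only printed, and the namespace default is a placeholder:

```shell
# Sketch: verify the state manager username secret before launching 5.2 workers.
# KUBECTL defaults to "echo kubectl" (prints the command only); set
# KUBECTL=kubectl and CORDA_NAMESPACE to run it against the cluster.
KUBECTL="${KUBECTL:-echo kubectl}"
CORDA_NAMESPACE="${CORDA_NAMESPACE:-corda}"

check_secret() {
  # With real kubectl this prints the base64-encoded corda-username field
  $KUBECTL get secret prereqs-postgres-user -n "$CORDA_NAMESPACE" \
    -o jsonpath='{.data.corda-username}'
}

check_secret
```

With real `kubectl`, pipe the output through `base64 -d` to confirm it decodes to the 5.1 username.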
Add the accompanying YAML that pulls the username out of the secret, alongside the password configuration that existed in 5.1:

workers:
  crypto:
    config:
      username:
        valueFrom:
          secretKeyRef:
            name: "prereqs-postgres-user"
            key: "corda-username"
      password:
        valueFrom:
          secretKeyRef:
            name: "prereqs-postgres"
            key: "corda-password"
    stateManager:
      keyRotation:
        username:
          valueFrom:
            secretKeyRef:
              name: "prereqs-postgres-user"
              key: "corda-username"
        password:
          valueFrom:
            secretKeyRef:
              name: "prereqs-postgres"
              key: "corda-password"
  db:
    config:
      username:
        valueFrom:
          secretKeyRef:
            name: "prereqs-postgres-user"
            key: "corda-username"
      password:
        valueFrom:
          secretKeyRef:
            name: "prereqs-postgres"
            key: "corda-password"
  persistence:
    config:
      username:
        valueFrom:
          secretKeyRef:
            name: "prereqs-postgres-user"
            key: "corda-username"
      password:
        valueFrom:
          secretKeyRef:
            name: "prereqs-postgres"
            key: "corda-password"
  tokenSelection:
    config:
      username:
        valueFrom:
          secretKeyRef:
            name: "prereqs-postgres-user"
            key: "corda-username"
      password:
        valueFrom:
          secretKeyRef:
            name: "prereqs-postgres"
            key: "corda-password"
    stateManager:
      tokenPoolCache:
        username:
          valueFrom:
            secretKeyRef:
              name: "prereqs-postgres-user"
              key: "corda-username"
        password:
          valueFrom:
            secretKeyRef:
              name: "prereqs-postgres"
              key: "corda-password"
  uniqueness:
    config:
      username:
        valueFrom:
          secretKeyRef:
            name: "prereqs-postgres-user"
            key: "corda-username"
      password:
        valueFrom:
          secretKeyRef:
            name: "prereqs-postgres"
            key: "corda-password"
  flow:
    stateManager:
      flowCheckpoint:
        username:
          valueFrom:
            secretKeyRef:
              name: "prereqs-postgres-user"
              key: "corda-username"
        password:
          valueFrom:
            secretKeyRef:
              name: "prereqs-postgres"
              key: "corda-password"
  flowMapper:
    stateManager:
      flowMapping:
        username:
          valueFrom:
            secretKeyRef:
              name: "prereqs-postgres-user"
              key: "corda-username"
        password:
          valueFrom:
            secretKeyRef:
              name: "prereqs-postgres"
              key: "corda-password"
  p2pLinkManager:
    stateManager:
      p2pSession:
        username:
          valueFrom:
            secretKeyRef:
              name: "prereqs-postgres-user"
              key: "corda-username"
        password:
          valueFrom:
            secretKeyRef:
              name: "prereqs-postgres"
              key: "corda-password"
  rest:
    stateManager:
      keyRotation:
        username:
          valueFrom:
            secretKeyRef:
              name: "prereqs-postgres-user"
              key: "corda-username"
        password:
          valueFrom:
            secretKeyRef:
              name: "prereqs-postgres"
              key: "corda-password"
      flowStatus:
        username:
          valueFrom:
            secretKeyRef:
              name: "prereqs-postgres-user"
              key: "corda-username"
        password:
          valueFrom:
            secretKeyRef:
              name: "prereqs-postgres"
              key: "corda-password"
Setting Search Paths
In Corda 5.1, schema names were injected everywhere as part of JDBC URLs. As of Corda 5.2, to support databases behind proxies, `search_path` was removed from JDBC URLs. As a result, as part of the migration process, you must set the `search_path` of the cluster database at the SQL level. For example:

psql -h localhost -c "ALTER ROLE \"corda\" SET search_path TO config, state_manager" -p 5432 -d cordacluster -U postgres
psql -h localhost -c "ALTER ROLE \"rbac_user\" SET search_path TO rbac" -p 5432 -d cordacluster -U postgres
psql -h localhost -c "ALTER ROLE \"crypto_user\" SET search_path TO crypto" -p 5432 -d cordacluster -U postgres
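To confirm the `search_path` was stored on each role, you can query `pg_roles` (the `rolconfig` column records per-role settings). A sketch, with `PSQL` defaulting to an echo preview and the role names taken from the examples above:

```shell
# Sketch: confirm the search_path stuck for each role. PSQL defaults to an echo
# preview (prints the command); point it at real psql to execute the query.
PSQL="${PSQL:-echo psql -h localhost -p 5432 -d cordacluster -U postgres}"

show_search_path() {  # $1 = role name
  $PSQL -Atc "SELECT rolname, rolconfig FROM pg_roles WHERE rolname = '$1'"
}

for role in corda rbac_user crypto_user; do
  show_search_path "$role"
done
```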
Migrate the Virtual Node Databases
Migrating virtual node databases requires the short hash holding ID of each virtual node. For more information, see Retrieving Virtual Nodes.
To migrate the virtual node databases, do the following:
Create a file containing the short hash holding IDs of the virtual nodes to migrate.
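The file is plain text with one short hash holding ID per line. For example (the two IDs below are hypothetical placeholders; substitute the real IDs retrieved as described above):

```shell
# Create the input file with one short hash holding ID per line.
# 2FB5AF0FDD13 and 3C4C8B0A1EF2 are hypothetical placeholder IDs.
mkdir -p ./sql_updates
printf '%s\n' 2FB5AF0FDD13 3C4C8B0A1EF2 > ./sql_updates/holdingIds
cat ./sql_updates/holdingIds
```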
Generate the required SQL scripts using the `platform-migration` sub-command of the Corda CLI `vnode` command. For example, if you saved the holding IDs in `./sql_updates/holdingIds`:

corda-cli.sh vnode platform-migration --jdbc-url=jdbc:postgresql://host.docker.internal:5432/cordacluster -u postgres -i ./sql_updates/holdingIds -o ./sql_updates/vnodes.sql
corda-cli.cmd vnode platform-migration --jdbc-url=jdbc:postgresql://host.docker.internal:5432/cordacluster -u postgres -i ./sql_updates/holdingIds -o ./sql_updates/vnodes.sql
Review the generated SQL and apply it as follows:
psql -h localhost -p 5432 -f ./sql_updates/vnodes.sql -d cordacluster -U postgres
Grant the required permissions for the user for each virtual node for the three database schemas. Corda creates these users when it creates the schemas and so you must extract the credentials from the database using the previously created file of holding IDs. R3 recommends that you script this stage, as follows:
while read HOLDING_ID; do
  # In Corda 5.1, all virtual node schemas and users are created by Corda, so we need to extract their names from the db
  # Grab the schema names for this holding ID
  VAULT_SCHEMA=$(psql -h localhost -c "SELECT schema_name FROM information_schema.schemata WHERE schema_name LIKE 'vnode_vault%'" -p 5432 -d cordacluster -U postgres | tr -d ' ' | grep -i $HOLDING_ID | grep vault)
  CRYPTO_SCHEMA=$(psql -h localhost -c "SELECT schema_name FROM information_schema.schemata WHERE schema_name LIKE 'vnode_crypto%'" -p 5432 -d cordacluster -U postgres | tr -d ' ' | grep -i $HOLDING_ID | grep crypto)
  UNIQ_SCHEMA=$(psql -h localhost -c "SELECT schema_name FROM information_schema.schemata WHERE schema_name LIKE 'vnode_uniq%'" -p 5432 -d cordacluster -U postgres | tr -d ' ' | grep -i $HOLDING_ID | grep uniq)
  # Get the vault users associated with this holding ID
  VAULT_DDL_USER=$(psql -h localhost -c "select usename from pg_catalog.pg_user" -p 5432 -d cordacluster -U postgres | grep -i $HOLDING_ID | tr -d ' ' | grep vault | grep ddl)
  VAULT_DML_USER=$(psql -h localhost -c "select usename from pg_catalog.pg_user" -p 5432 -d cordacluster -U postgres | grep -i $HOLDING_ID | tr -d ' ' | grep vault | grep dml)
  # Get the crypto users associated with this holding ID
  CRYPTO_DDL_USER=$(psql -h localhost -c "select usename from pg_catalog.pg_user" -p 5432 -d cordacluster -U postgres | grep -i $HOLDING_ID | tr -d ' ' | grep crypto | grep ddl)
  CRYPTO_DML_USER=$(psql -h localhost -c "select usename from pg_catalog.pg_user" -p 5432 -d cordacluster -U postgres | grep -i $HOLDING_ID | tr -d ' ' | grep crypto | grep dml)
  # Get the uniqueness users associated with this holding ID
  UNIQ_DDL_USER=$(psql -h localhost -c "select usename from pg_catalog.pg_user" -p 5432 -d cordacluster -U postgres | grep -i $HOLDING_ID | tr -d ' ' | grep uniq | grep ddl)
  UNIQ_DML_USER=$(psql -h localhost -c "select usename from pg_catalog.pg_user" -p 5432 -d cordacluster -U postgres | grep -i $HOLDING_ID | tr -d ' ' | grep uniq | grep dml)
  # Update privileges for any new tables in the crypto schema with the crypto users
  psql -h localhost -c "GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA $CRYPTO_SCHEMA TO $CRYPTO_DDL_USER" -p 5432 -d cordacluster -U postgres
  psql -h localhost -c "GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA $CRYPTO_SCHEMA TO $CRYPTO_DML_USER" -p 5432 -d cordacluster -U postgres
  # Update privileges for any new tables in the vault schema with the vault users
  psql -h localhost -c "GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA $VAULT_SCHEMA TO $VAULT_DDL_USER" -p 5432 -d cordacluster -U postgres
  psql -h localhost -c "GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA $VAULT_SCHEMA TO $VAULT_DML_USER" -p 5432 -d cordacluster -U postgres
  # Update privileges for any new tables in the uniqueness schema with the uniqueness users
  psql -h localhost -c "GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA $UNIQ_SCHEMA TO $UNIQ_DDL_USER" -p 5432 -d cordacluster -U postgres
  psql -h localhost -c "GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA $UNIQ_SCHEMA TO $UNIQ_DML_USER" -p 5432 -d cordacluster -U postgres
done < ./sql_updates/holdingIds
Update Kafka Topics
Corda 5.2 contains new Kafka topics and also revised Kafka ACLs. You can apply these changes in one of the following ways:
- Automatically to a running Kafka deployment using the Corda CLI.
- Manually by reviewing a preview of the required Kafka topic configuration.
Corda CLI Kafka Updates
Use the `connect` and `create` sub-commands of the Corda CLI `topic` command to connect to the Kafka broker and create any required topics. For example:
corda-cli.sh topic -b=prereqs-kafka:9092 -k=/kafka_config/props.txt create connect
corda-cli.cmd topic -b=prereqs-kafka:9092 -k=/kafka_config/props.txt create connect
Manual Kafka Updates
Alternatively, the `preview` and `create` sub-commands of the Corda CLI `topic` command can generate a preview of the required Kafka configuration in YAML. You can save, and if required modify, this content before using the Corda CLI to execute it, as follows:
Use the `preview` sub-command of the Corda CLI `topic create` command to generate a preview of the configuration. For example:

corda-cli.sh topic create -u crypto=CRYPTO_USER -u db=DB_USER -u flow=FLOW_USER -u membership=MEMBERSHIP_USER \
  -u p2pGateway=P2P_GATEWAY_USER -u p2pLinkManager=P2P_LINK_MANAGER_USER -u rest=REST_USER \
  -u uniqueness=UNIQUENESS_WORKER -u flowMapper=FLOW_MAPPER_USER -u persistence=PERSISTENCE_USER \
  -u verification=VERIFICATION_WORKER preview

corda-cli.cmd topic create -u crypto=CRYPTO_USER -u db=DB_USER -u flow=FLOW_USER -u membership=MEMBERSHIP_USER `
  -u p2pGateway=P2P_GATEWAY_USER -u p2pLinkManager=P2P_LINK_MANAGER_USER -u rest=REST_USER `
  -u uniqueness=UNIQUENESS_WORKER -u flowMapper=FLOW_MAPPER_USER -u persistence=PERSISTENCE_USER `
  -u verification=VERIFICATION_WORKER preview
Review the output and make any necessary changes.
The YAML generated by the Corda CLI represents the required state of Kafka topics for Corda 5.2. The Corda CLI does not connect to any running Kafka instance and so the Kafka instance administrator must use the preview to decide the required changes for your cluster.
R3 recommends that you do not delete old topics or ACLs until the upgrade to Corda 5.2 is complete. While these old topics remain, you can still perform an upgrade rollback.
Launch the Corda 5.2 Workers
To complete the upgrade to 5.2 and launch the Corda 5.2 workers, upgrade the Helm chart, ensuring that <YOUR_VALUES_YAML> is a YAML file containing only 5.2 schema-compatible values:
helm upgrade corda -n <corda_namespace> oci://corda-os-docker.software.r3.com/helm-charts/release/os/5.2/corda --version 5.2.0 -f <YOUR_VALUES_YAML>
For more information about the values in the deployment YAML file, see Configure the Deployment.
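After the Helm upgrade, it is worth confirming that every worker deployment has finished rolling out before moving on to the CPI upload. A sketch using `kubectl rollout status` per worker; `KUBECTL` defaults to an echo preview so the commands are only printed, and the namespace default is a placeholder:

```shell
# Sketch: wait for every upgraded worker deployment to finish rolling out.
# KUBECTL defaults to "echo kubectl" so the commands are only printed; set
# KUBECTL=kubectl and CORDA_NAMESPACE to execute for real.
KUBECTL="${KUBECTL:-echo kubectl}"
CORDA_NAMESPACE="${CORDA_NAMESPACE:-corda}"

wait_for_workers() {
  for w in crypto db flow-mapper flow membership p2p-gateway p2p-link-manager persistence rest token-selection uniqueness verification; do
    $KUBECTL rollout status "deployment/corda-${w}-worker" -n "$CORDA_NAMESPACE" --timeout=300s
  done
}

wait_for_workers
```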
Upload the Corda 5.2 CPIs to virtual nodes
For each major Corda version change, you must upgrade your virtual nodes to ensure they are using the latest version’s CPIs. To do that, follow the steps described in Upgrading a CPI.