Stream a Changefeed to Snowflake

While CockroachDB is an excellent system of record, it also needs to coexist with other systems. For example, you might want to keep your data mirrored in full-text indexes, analytics engines, or big data pipelines.

This page demonstrates how to use an Enterprise changefeed to stream row-level changes to Snowflake, an online analytical processing (OLAP) database.

Note:

Snowflake is optimized for INSERTs and batch rewrites rather than streaming updates. As a result, this pipeline appends every change as a new row; UPDATEs and DELETEs from CockroachDB are not applied to existing rows in Snowflake. If that behavior is required, additional setup (not covered in this tutorial) can allow entire tables to be replaced in batch.

Before you begin

Before you begin, make sure you have:

  • Admin access to a CockroachDB cluster (Step 1 walks you through creating one if you do not have one already).
  • An AWS account, with permissions to create an S3 bucket and IAM credentials that have write access to it.
  • A Snowflake account, with privileges to create tables, stages, and pipes.

Step 1. Create a cluster

If you have not done so already, create a cluster.

Step 2. Configure your cluster

  1. Connect to the built-in SQL shell as a user with Admin privileges, replacing the placeholders in the client connection string with the correct username, password, and path to the ca.crt:

    $ cockroach sql \
    --url='postgres://<username>:<password>@<global host>:26257?sslmode=verify-full&sslrootcert=certs/ca.crt'
    
    Note:

    If you haven't connected to your cluster before, see Connect to your CockroachDB Dedicated Cluster or Connect to your CockroachDB Serverless Cluster for information on how to initially connect.

  2. Enable rangefeeds:

    > SET CLUSTER SETTING kv.rangefeed.enabled = true;
    
    SET CLUSTER SETTING
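
    To confirm the setting took effect, you can optionally check its current value; it should display as true:

    > SHOW CLUSTER SETTING kv.rangefeed.enabled;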
    

Step 3. Create a database

  1. In the built-in SQL shell, create a database called cdc_test:

    > CREATE DATABASE cdc_test;
    
    CREATE DATABASE
    
  2. Set it as the default:

    > SET DATABASE = cdc_test;
    
    SET
    

Step 4. Create tables

Before you can start a changefeed, you need to create at least one table for the changefeed to target. The targeted table's rows are referred to as the "watched rows".

Let's create a table called order_alerts to target:

> CREATE TABLE order_alerts (
    id   INT PRIMARY KEY,
    name STRING
);
CREATE TABLE
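
To confirm the table exists before starting the changefeed, you can optionally list the tables in the current database:

> SHOW TABLES;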

Step 5. Create an S3 bucket in the AWS Console

Every change to a watched row is emitted as a record in a configurable format (i.e., JSON for cloud storage sinks). To configure an AWS S3 bucket as the cloud storage sink:

  1. Log in to your AWS S3 Console.

  2. Create an S3 bucket called changefeed-example, where streaming updates from the watched tables will be collected.

    You will need the name of the S3 bucket when you create your changefeed. Be sure to have a set of IAM credentials with write access to the S3 bucket; you will use them during changefeed setup.
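
Once the changefeed is running and rows in the watched table change, each change is written to this bucket as a newline-delimited JSON record. For example, with the updated option used later in this tutorial, a record for a row in order_alerts will look similar to the following (the timestamp is illustrative, and depending on your CockroachDB version the record may include additional fields such as the row's key):

{"after": {"id": 1, "name": "Order received"}, "updated": "1617309919997288411.0000000000"}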

Step 6. Create an enterprise changefeed

Back in the built-in SQL shell, create an enterprise changefeed:

> CREATE CHANGEFEED FOR TABLE order_alerts
    INTO 's3://changefeed-example?AWS_ACCESS_KEY_ID={access key ID}&AWS_SECRET_ACCESS_KEY={secret access key}'
    WITH
        updated;

        job_id
+--------------------+
  000000000000000000
(1 row)

Be sure to replace the placeholders with your AWS access key ID and AWS secret access key. See Cloud Storage Authentication for more detail on authenticating to Amazon S3.

Note:

If your changefeed is running but data is not displaying in your S3 bucket, you might have to debug your changefeed.
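
To monitor the changefeed after it starts, you can check its status from the SQL shell. In recent CockroachDB versions, the following statement lists each changefeed along with its status and sink URI:

> SHOW CHANGEFEED JOBS;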

Step 7. Insert data into the tables

  1. In the built-in SQL shell, insert data into the order_alerts table that the changefeed is targeting:

    > INSERT INTO order_alerts
        VALUES
            (1, 'Order received'),
            (2, 'Order processed');
    
    INSERT 2
    
  2. Navigate back to the S3 bucket to confirm that the data is now streaming to the bucket. A new directory should display on the Overview tab.

    Note:

    If your changefeed is running but data is not displaying in your S3 bucket, you might have to debug your changefeed.

Step 8. Configure Snowflake

  1. Log in to Snowflake as a user with read and write access (you will create a table, a stage, and a pipe).

  2. Navigate to the Worksheet view.

  3. Create a table to store the data to be ingested:

    > CREATE TABLE order_alerts (
       changefeed_record VARIANT
      );
    

    This stores each changefeed record in a single VARIANT column as JSON. You can then query fields within the JSON using Snowflake's path syntax, as if they were ordinary columns.

  4. Run the statement.

  5. In the Worksheet, create a stage called cdc_stage, which tells Snowflake where your data files reside in S3:

    > CREATE STAGE cdc_stage url='s3://changefeed-example/' credentials=(aws_key_id='<KEY>' aws_secret_key='<SECRET_KEY>') file_format = (type = json);
    

    Be sure to replace the placeholders with your AWS access key ID and AWS secret access key.

  6. In the Worksheet, create a Snowpipe called cdc_pipe, which tells Snowflake to auto-ingest data:

    > CREATE PIPE cdc_pipe auto_ingest = TRUE as COPY INTO order_alerts FROM @cdc_stage;
    
    Note:

    Auto-ingest in Snowflake works with AWS, Azure, and Google Cloud Storage.

  7. In the Worksheet, view the Snowpipe:

    > SHOW PIPES;
    
  8. Copy the ARN of the SQS queue for your stage (this displays in the notification_channel column). You will use this information to configure the S3 bucket.

Step 9. Configure the S3 bucket

  1. Navigate back to your S3 bucket.

  2. Configure an event notification for the S3 bucket. Use the following parameters:

    • Name: Name of the event notification (e.g., Auto-ingest Snowflake).
    • Events: Select All object create events.
    • Send to: Select SQS Queue from the drop-down.
    • SQS: Select Add SQS queue ARN from the drop-down.
    • SQS queue ARN: Paste the SQS queue ARN from the SHOW PIPES output (from Step 8).
  3. Navigate back to Snowflake.

  4. Ingest the data from your stage:

    > ALTER PIPE cdc_pipe refresh;
    
  5. To view the data in Snowflake, query the order_alerts table:

    > SELECT * FROM order_alerts;
    

    The ingested rows will display in the Results panel. It may take a few minutes for the data to load into Snowflake. To check on the progress of data being written to the destination table, see Snowflake's documentation on querying a stage.
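
    If the rows do not appear after a few minutes, you can also check the pipe directly. SYSTEM$PIPE_STATUS is a built-in Snowflake function that returns the pipe's current state, including any pending files, as JSON:

    > SELECT SYSTEM$PIPE_STATUS('cdc_pipe');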

Your changefeed is now streaming to Snowflake.
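
Because each record lands in a single VARIANT column, you can also query individual fields with Snowflake's JSON path syntax. For example, assuming the records use the wrapped envelope shown earlier (row data nested under an after field):

> SELECT changefeed_record:after:id::INT      AS id,   -- assumes the row data is nested under "after"
         changefeed_record:after:name::STRING AS name
    FROM order_alerts;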

Known limitations

  • Snowflake cannot filter streaming updates by table. Because of this, we recommend creating a changefeed that watches only one table.
  • Snowpipe is unaware of CockroachDB resolved timestamps. This means CockroachDB transactions will not be loaded atomically and partial transactions can briefly be returned from Snowflake.
  • Snowpipe works best with append-only workloads, as Snowpipe lacks native ETL capabilities to perform updates to data. You may need to pre-process data before uploading it to Snowflake.

See the Change Data Capture Overview for more general changefeed known limitations.

