What's New in v24.3


CockroachDB v24.3 is a required Regular Release.

Refer to Major release types before installing or upgrading for release timing and support details. To learn what’s new in this release, refer to its Feature Highlights.

On this page, you can read about changes and find downloads for all production and testing releases of CockroachDB v24.3.


v24.3.0

Release Date: November 18, 2024

With the release of CockroachDB v24.3, we've added new capabilities to help you migrate, build, and operate more efficiently. Refer to our summary of the most significant user-facing changes under Feature Highlights.

Downloads

Note:

This version is currently available only for select CockroachDB Cloud clusters. To request to upgrade a CockroachDB self-hosted cluster to this version, contact support.

Changelog

View a detailed changelog on GitHub: v24.3.0-rc.1...v24.3.0

Feature highlights

This section summarizes the most significant user-facing changes in v24.3.0 and other features recently made available to CockroachDB users across versions. For a complete list of features and changes in v24.3, including bug fixes and performance improvements, refer to the release notes for previous v24.3 testing releases. You can also search the docs for sections labeled New in v24.3.

CockroachDB Licensing

Feature Availability
Ver. Self-hosted Advanced Standard Basic

Licensing changes

All CockroachDB versions released on or after the v24.3.0 release date, including patch releases for v23.1 through v24.2, are made available under the CockroachDB Software License.

See below for a summary of license options for self-hosted deployments. All Cloud deployments automatically have a valid Enterprise license.

  • Enterprise: This paid license allows usage of all CockroachDB features in accordance with the terms specified in the CockroachDB Software License.

  • Enterprise Free: Same functionality as Enterprise, but free of charge for businesses with less than $10M in annual revenue. Clusters will be throttled after 7 days without sending telemetry. License must be renewed annually.

  • Enterprise Trial: A 30-day self-service trial license. Telemetry is required during the trial. Clusters will be throttled after 7 days without sending telemetry. Telemetry can be disabled once the cluster is upgraded to a paid Enterprise license.

See the Licensing FAQs page for more details on the CockroachDB Software License and license options. You may acquire CockroachDB licenses through the CockroachDB Cloud console.
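
For example, on a self-hosted cluster you might apply a license key obtained from the CockroachDB Cloud console and confirm that telemetry reporting is enabled. This is a minimal sketch; the key value below is a placeholder:

    SET CLUSTER SETTING enterprise.license = '<license key from the Cloud console>';
    SET CLUSTER SETTING diagnostics.reporting.enabled = true;
    SHOW CLUSTER SETTING enterprise.license;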

24.3 Green checkmark (Yes) Green checkmark (Yes) Green checkmark (Yes) Green checkmark (Yes)

CockroachDB Cloud

Feature Availability
Ver. Self-hosted Advanced Standard Basic

Free trial on CockroachDB Cloud

New CockroachDB Cloud organizations can benefit from a 30-day free trial that enables you to consume up to $400 worth of free credits. Get started by signing up for CockroachDB Cloud.

All Gray circle with horizontal white line (No) Gray circle with horizontal white line (No) Green checkmark (Yes) Green checkmark (Yes)

Change Data Capture

Feature Availability
Ver. Self-hosted Advanced Standard Basic

IAM authentication support for Amazon MSK Serverless

Changefeeds support IAM Authentication with Amazon MSK Serverless clusters (Amazon Managed Streaming for Apache Kafka). This feature is generally available.

24.3 Green checkmark (Yes) Green checkmark (Yes) Green checkmark (Yes) Green checkmark (Yes)

Disaster Recovery

Feature Availability
Ver. Self-hosted Advanced Standard Basic

SELECT now supported on PCR standby clusters

Physical cluster replication (PCR) has been enhanced to support SELECT operations on standby clusters. This enables you to scale read performance by running, for example, non-critical workloads on standby clusters.

24.3 Green checkmark (Yes) Gray circle with horizontal white line (No) Gray circle with horizontal white line (No) Gray circle with horizontal white line (No)

Logical Data Replication in Preview

Logical data replication (LDR) continuously replicates tables from an active source CockroachDB cluster to an active destination CockroachDB cluster. Both the source and destination can receive application reads and writes, and they can participate in bidirectional LDR for eventual consistency in the replicating tables.

The active-active setup between clusters can provide protection against cluster, datacenter, or region failure while still achieving single-region low latency reads and writes in the individual CockroachDB clusters. Each cluster in an LDR job still benefits individually from multi-active availability with CockroachDB's built-in Raft replication providing data consistency across nodes, zones, and regions.

This feature is in Preview.

24.3 Green checkmark (Yes) Gray circle with horizontal white line (No) Gray circle with horizontal white line (No) Gray circle with horizontal white line (No)

SQL

Feature Availability
Ver. Self-hosted Advanced Standard Basic

User-defined functions and stored procedures support SECURITY DEFINER

You can create or alter a user-defined function (UDF) or stored procedure (SP) with [EXTERNAL] SECURITY DEFINER instead of the default [EXTERNAL] SECURITY INVOKER. With SECURITY DEFINER, the privileges of the owner are checked when the UDF or SP is executed, rather than the privileges of the executor. The EXTERNAL keyword is optional and exists for SQL language conformity.
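
As a minimal sketch, assuming a hypothetical orders table owned by the function's creator, the following defines a SQL UDF whose body runs with the owner's privileges when other users call it:

    CREATE FUNCTION order_count() RETURNS INT
      LANGUAGE SQL
      SECURITY DEFINER
      AS $$ SELECT count(*) FROM orders $$;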

24.3 Green checkmark (Yes) Green checkmark (Yes) Green checkmark (Yes) Green checkmark (Yes)

CockroachDB now supports triggers

CockroachDB now supports triggers. Triggers allow automatic execution of specified functions in response to specified events on a particular table or view. They can be used for automating tasks, enforcing business rules, and maintaining data integrity.
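
A minimal sketch, using hypothetical tables and a hypothetical trigger function, might look like the following; the trigger function runs for each row inserted into or updated in accounts and records the change:

    CREATE TABLE accounts (id INT PRIMARY KEY, balance DECIMAL);
    CREATE TABLE account_audit (account_id INT, balance DECIMAL, changed_at TIMESTAMPTZ);

    CREATE FUNCTION record_account_change() RETURNS TRIGGER LANGUAGE PLpgSQL AS $$
    BEGIN
      -- NEW refers to the row being inserted or updated.
      INSERT INTO account_audit VALUES (NEW.id, NEW.balance, now());
      RETURN NEW;
    END;
    $$;

    CREATE TRIGGER audit_accounts
      AFTER INSERT OR UPDATE ON accounts
      FOR EACH ROW EXECUTE FUNCTION record_account_change();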

24.3 Green checkmark (Yes) Green checkmark (Yes) Green checkmark (Yes) Green checkmark (Yes)

Security

Feature Availability
Ver. Self-hosted Advanced Standard Basic

LDAP support in Preview

CockroachDB supports authentication and authorization using LDAP-compatible directory services, such as Active Directory and Microsoft Entra ID. This allows you to integrate CockroachDB clusters with your organization's existing identity infrastructure for centralized user management and access control. This feature is available in Preview.

24.3 Green checkmark (Yes) Gray circle with horizontal white line (No) Gray circle with horizontal white line (No) Gray circle with horizontal white line (No)

Observability

Feature Availability
Ver. Self-hosted Advanced Standard Basic

Improved usability for the DB Console Metrics page

Introduced several enhancements to the DB Console Metrics page to support large scale clusters, including the following:

  • Added on-hover cursor support that will display the closest time-series value and highlight the node in the legend to allow users to quickly pinpoint outliers.

  • Improved legend visibility and made legends scrollable to improve usability and reduce vertical scrolling.

24.3 Green checkmark (Yes) Green checkmark (Yes) Gray circle with horizontal white line (No) Gray circle with horizontal white line (No)

Improved performance and scalability for the DB Console Databases pages

CockroachDB now caches the data that is surfaced in the Databases page. This enhances the performance and scalability of the Databases page for large-scale clusters.

24.3 Green checkmark (Yes) Green checkmark (Yes) Gray circle with horizontal white line (No) Gray circle with horizontal white line (No)

Improved admission control observability

The DB Console Overload page now provides additional metrics to help identify overload in the system. Graphs and metrics on this page provide quick signals about which resource is exhausted and whether the exhaustion is due to foreground or background activity.

There are now 4 graphs for admission queue delay:

  1. Foreground (regular) CPU work

  2. Store (IO) work

  3. Background (elastic) CPU work

  4. Replication admission control (store overload on replicas)

24.3 Green checkmark (Yes) Green checkmark (Yes) Gray circle with horizontal white line (No) Gray circle with horizontal white line (No)
Feature detail key
Features marked "All★" were recently made available in the CockroachDB Cloud platform. They are available for all supported versions of CockroachDB, under the deployment methods specified in their row under Availability.
★★ Features marked "All★★" were recently made available via tools maintained outside of the CockroachDB binary. They are available to use with all supported versions of CockroachDB, under the deployment methods specified in their row under Availability.
Green checkmark (Yes) Feature is available for this deployment method of CockroachDB as specified in the icon’s column: CockroachDB Self-hosted, CockroachDB Advanced, CockroachDB Standard, or CockroachDB Basic.
Gray circle with horizontal white line (No) Feature is not available for this deployment method of CockroachDB as specified in the icon’s column: CockroachDB Self-hosted, CockroachDB Advanced, CockroachDB Standard, or CockroachDB Basic.

Backward-incompatible changes

Before upgrading to CockroachDB v24.3, be sure to review the following backward-incompatible changes, as well as key cluster setting changes, and adjust your deployment as necessary.

If you plan to upgrade to v24.3 directly from v24.1 and skip v24.2, be sure to also review the v24.2 release notes for backward-incompatible changes from v24.1.

  • Upgrading to v24.3 is blocked if no license is installed, or if a trial/free license is installed with telemetry disabled. #130576

Features that Require Upgrade Finalization

During a major-version upgrade, certain features and performance improvements may not be available until the upgrade is finalized.

  • A cluster must have an Enterprise license or a trial license set before an upgrade to v24.3 can be finalized.
  • New clusters initialized for the first time on v24.3, and clusters upgraded to v24.3, will have a zone config defined for the timeseries range if one does not already exist. This zone config specifies a value for gc.ttlseconds but inherits all other attributes from the zone config for the default range.

Key Cluster Setting Changes

Changes to cluster settings should be reviewed prior to upgrading. New default cluster setting values will be used unless you have manually set a value for a setting. This can be confirmed by running the SQL statement SELECT * FROM system.settings to view the non-default settings.
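
Because system.settings stores only the settings that have been explicitly set, a query like the following lists your non-default values:

    SELECT name, value FROM system.settings ORDER BY name;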

Settings added
  • goschedstats.always_use_short_sample_period.enabled: when set to true, helps to prevent unnecessary queueing due to CPU admission control by forcing 1ms sampling of runnable queue lengths. The default value is false. #133585

  • kv.range.range_size_hard_cap: allows you to limit how large a range can grow before backpressure is applied. This can help to mitigate against a situation where a range cannot be split, such as when a range consists of a single key due to an issue with the schema or workload pattern, or a bug in client application code. The default is 8 GiB, 16 times the default maximum range size. If you have changed the maximum range size, you may need to adjust this cluster setting or reduce the range size. #129450

  • kvadmission.flow_controller.token_reset_epoch: can be used to refill replication admission control v2 tokens. This setting is marked as reserved, as it is not supported for tuning, by default. Use it only after consultation with your account team. #133294

  • kvadmission.store.snapshot_ingest_bandwidth_control.enabled: enables a new Admission Control integration for pacing snapshot ingest traffic based on disk bandwidth. It requires provisioned bandwidth to be set for the store, or the cluster through the setting kvadmission.store.provisioned_bandwidth, for it to take effect. #131243

  • Settings have been added which control the refresh behavior for the cached data in the Databases page of the DB Console:

    • obs.tablemetadatacache.data_valid_duration: the duration for which the data in system.table_metadata is considered valid before a cache reset will occur. Default: 20 minutes.
    • obs.tablemetadatacache.automatic_updates.enabled: whether to automatically update the cache according to the validity interval. Default: false.

    #130198

  • server.jwt_authentication.client.timeout: the HTTP client timeout for external calls made during JWT authentication. #127145

  • Partial statistics can now be automatically collected at the extremes of indexes when a certain fraction and minimum number of rows are stale (by default 5% and 100%, respectively). These can be configured with new table storage parameters and cluster settings:

    • sql.stats.automatic_partial_collection.enabled (table parameter sql_stats_automatic_partial_collection_enabled) - both default to false.
    • sql.stats.automatic_partial_collection.min_stale_rows (table parameter sql_stats_automatic_partial_collection_min_stale_rows) - both default to 100.
    • sql.stats.automatic_partial_collection.fraction_stale_rows (table parameter sql_stats_automatic_partial_collection_fraction_stale_rows) - both default to 0.05.

    #93067
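
    As a sketch, you could enable the feature cluster-wide, or for a single table via its storage parameter (the table name here is hypothetical):

    SET CLUSTER SETTING sql.stats.automatic_partial_collection.enabled = true;
    ALTER TABLE orders SET (sql_stats_automatic_partial_collection_enabled = true);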

  • sql.stats.histogram_buckets.include_most_common_values.enabled: controls whether common values are included in histogram collection for use by the optimizer. When enabled, histogram buckets will represent the most common sampled values as upper bounds. #129378

  • sql.stats.histogram_buckets.max_fraction_most_common_values: controls the fraction of buckets that can be adjusted to include common values. Defaults to 0.1. #129378

  • sql.txn.repeatable_read_isolation.enabled: defaults to false. When set to true, the following statements configure transactions to run under REPEATABLE READ isolation, rather than being automatically interpreted as SERIALIZABLE:

    • BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ
    • SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
    • SET default_transaction_isolation = 'repeatable read'
    • SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL REPEATABLE READ
Settings with changed defaults
  • The default for sql.defaults.large_full_scan_rows is now 0. For users whose session variable values are inherited from these settings, this means that when sql.defaults.disallow_full_table_scans.enabled is set to true, all full table scans are disallowed by default, even full scans on very small tables. If sql.defaults.large_full_scan_rows is set to a number greater than 0, full scans are allowed when they are estimated to read fewer than that number of rows.

    • Note: All sql.defaults settings are maintained for backward compatibility. We recommend using ALTER ROLE, instead, to set the corresponding session vars for users (in this case, large_full_scan_rows and disallow_full_table_scans). For more information, see the note on the Cluster Settings table.
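
      For example, a sketch of setting these session variable defaults for all users with ALTER ROLE (the threshold value is illustrative):

      ALTER ROLE ALL SET disallow_full_table_scans = true;
      ALTER ROLE ALL SET large_full_scan_rows = 1000;
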
  • Increased the per-vCPU concurrency limits for KV operations:

    • The default for kv.dist_sender.concurrency_limit (reserved) has changed from 64 per vCPU to 384 per vCPU. (In v24.3, it is possible to estimate the current concurrency level using the new metric distsender.batches.async.in_progress.)
    • The default for kv.streamer.concurrency_limit (reserved) has changed from 8 per vCPU to 96 per vCPU.
    • These are reserved settings, not intended for tuning by customers.
    • When running SHOW CLUSTER SETTING, the displayed setting values will depend on the node's number of vCPUs.
    • Contact Support if the number of distsender.batches.async.throttled requests is persistently greater than zero.

    #131226

  • The default for server.oidc_authentication.client.timeout, which sets the client timeout for external calls made during OIDC authentication, has changed from 30s to 15s.

Settings with changed visibility

The following settings are now marked public after previously being reserved. Reserved settings are not documented and their tuning by customers is not supported.

  • Cluster settings for configuring rate limiting for traffic to cloud storage are now public.

    • These settings have the prefix cloudstorage followed by:
      1. a provider or protocol (azure, gs, s3, http, nodelocal, userfile, or nullsink)
      2. read or write
      3. node_burst_limit or node_rate_limit
    • For example, cloudstorage.s3.write.node_burst_limit. #127207
  • The following cluster settings for JWT authentication have been made public. #128170

    • server.jwt_authentication.audience
    • server.jwt_authentication.claim
    • server.jwt_authentication.enabled
    • server.jwt_authentication.issuers.custom_ca
    • server.jwt_authentication.jwks
    • server.jwt_authentication.jwks_auto_fetch.enabled
  • Settings with the prefix server.ldap_authentication have been made public with the Preview release of LDAP support:

    • server.ldap_authentication.client.tls_certificate
    • server.ldap_authentication.client.tls_key
    • server.ldap_authentication.domain.custom_ca
Additional cluster setting changes
  • The setting server.host_based_authentication.configuration now supports LDAP configuration, and its value is now redacted for non-admin users when server.redact_sensitive_settings.enabled is set to true. #131150

  • The settings enterprise.license and diagnostics.reporting.enabled now have additional validation. To disable diagnostics reporting, the cluster must also have a license that is not an Enterprise Trial or Enterprise Free license. Additionally, to set one of these licenses, the cluster must already be submitting diagnostics information. #131097 #132257

  • sql.defaults.vectorize now supports the value 1 (in addition to 0 and 2) to indicate on, to address a bug that could cause new connections to fail after an upgrade with a message referencing an invalid value for parameter "vectorize": "unknown(1)". #133371

  • The description of the setting changefeed.sink_io_workers has been updated to reflect all of the sinks that support the setting: the batching versions of webhook, pubsub, and kafka sinks that are enabled by changefeed.new_<sink type>_sink_enabled. #129946

Deprecations

The following deprecations are announced in v24.3. If you plan to upgrade to v24.3 directly from v24.1 and skip v24.2, be sure to also review the v24.2 release notes for deprecations.

Known limitations

For information about new and unresolved limitations in CockroachDB v24.3, with suggested workarounds where applicable, refer to Known Limitations.

Additional resources

Resource Topic Description
Cockroach University Introduction to Distributed SQL and CockroachDB This course introduces the core concepts behind distributed SQL databases and describes how CockroachDB fits into this landscape. You will learn what differentiates CockroachDB from both legacy SQL and NoSQL databases and how CockroachDB ensures consistent transactions without sacrificing scale and resiliency. You'll learn about CockroachDB's seamless horizontal scalability, distributed transactions with strict ACID guarantees, and high availability and resilience.
Cockroach University Practical First Steps with CockroachDB This course will give you the tools you need to get started with CockroachDB. During the course, you will learn how to spin up a cluster, use the Admin UI to monitor cluster activity, and use SQL shell to solve a set of hands-on exercises.
Cockroach University Enterprise Application Development with CockroachDB This course is the first in a series designed to equip you with best practices for mastering application-level (client-side) transaction management in CockroachDB. We'll dive deep on common differences between CockroachDB and legacy SQL databases and help you sidestep challenges you might encounter when migrating to CockroachDB from Oracle, PostgreSQL, and MySQL.
Cockroach University Building a Highly Resilient Multi-region Database using CockroachDB This course is part of a series introducing solutions to running low-latency, highly resilient applications for data-intensive workloads on CockroachDB. In this course we focus on surviving large-scale infrastructure failures like losing an entire cloud region without losing data during recovery. We'll show you how to use CockroachDB survival goals in a multi-region cluster to implement a highly resilient database that survives node or network failures across multiple regions with zero data loss.
Docs Migration Overview This page summarizes the steps of migrating a database to CockroachDB, which include testing and updating your schema to work with CockroachDB, moving your data into CockroachDB, and testing and updating your application.
Docs Architecture Overview This page provides a starting point for understanding the architecture and design choices that enable CockroachDB's scalability and consistency capabilities.
Docs SQL Feature Support The page summarizes the standard SQL features CockroachDB supports as well as common extensions to the standard.
Docs Change Data Capture Overview This page summarizes CockroachDB's data streaming capabilities. Change data capture (CDC) provides efficient, distributed, row-level changefeeds into a configurable sink for downstream processing such as reporting, caching, or full-text indexing.
Docs Backup Architecture This page describes the backup job workflow with a high-level overview, diagrams, and more details on each phase of the job.

v24.3.0-rc.1

Release Date: November 18, 2024

Downloads

Warning:

CockroachDB v24.3.0-rc.1 is a testing release. Testing releases are intended for testing and experimentation only, and are not qualified for production environments and not eligible for support or uptime SLA commitments.

Note:

Experimental downloads are not qualified for production use and not eligible for support or uptime SLA commitments, whether they are for testing releases or production releases.

Operating System Architecture Full executable SQL-only executable
Linux Intel cockroach-v24.3.0-rc.1.linux-amd64.tgz
(SHA256)
cockroach-sql-v24.3.0-rc.1.linux-amd64.tgz
(SHA256)
ARM cockroach-v24.3.0-rc.1.linux-arm64.tgz
(SHA256)
cockroach-sql-v24.3.0-rc.1.linux-arm64.tgz
(SHA256)
Mac
(Experimental)
Intel cockroach-v24.3.0-rc.1.darwin-10.9-amd64.tgz
(SHA256)
cockroach-sql-v24.3.0-rc.1.darwin-10.9-amd64.tgz
(SHA256)
ARM cockroach-v24.3.0-rc.1.darwin-11.0-arm64.tgz
(SHA256)
cockroach-sql-v24.3.0-rc.1.darwin-11.0-arm64.tgz
(SHA256)
Windows
(Experimental)
Intel cockroach-v24.3.0-rc.1.windows-6.2-amd64.zip
(SHA256)
cockroach-sql-v24.3.0-rc.1.windows-6.2-amd64.zip
(SHA256)

Docker image

Multi-platform images include support for both Intel and ARM. Multi-platform images do not take up additional space on your Docker host.

Within the multi-platform image, both Intel and ARM images are generally available for production use.

To download the Docker image:


docker pull cockroachdb/cockroach-unstable:v24.3.0-rc.1

Source tag

To view or download the source code for CockroachDB v24.3.0-rc.1 on Github, visit v24.3.0-rc.1 source tag.

Changelog

View a detailed changelog on GitHub: v24.3.0-beta.3...v24.3.0-rc.1

Security updates

  • All cluster settings that accept strings are now fully redacted when transmitted as part of CockroachDB's diagnostics telemetry. This payload includes a record of modified cluster settings and their values when they are not strings. Customers who previously applied the mitigations in Technical Advisory 133479 can safely set the value of cluster setting server.redact_sensitive_settings.enabled to false and turn on diagnostic reporting via the diagnostics.reporting.enabled cluster setting without leaking sensitive cluster settings values. #134018

SQL language changes

  • Row-level AFTER triggers can now be executed in response to mutations on a table. Row-level AFTER triggers fire after checks and cascades have completed for the query. #133320
  • Cascades can now execute row-level BEFORE triggers. By default, attempting to modify or eliminate the cascading UPDATE or DELETE operation results in a Triggered Data Change Violation error. To bypass this error, you can set the unsafe_allow_triggers_modifying_cascades query option to true. This could result in constraint violations. #134444
  • String constants can now be compared with collated strings. #134086
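
    For example, assuming a hypothetical table with a collated column, a comparison against a plain string constant now works directly:

    CREATE TABLE users (id INT PRIMARY KEY, name STRING COLLATE en);
    SELECT * FROM users WHERE name = 'carl';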

Operational changes

  • The kvadmission.low_pri_read_elastic_control.enabled cluster setting has been removed, because all bulk requests are now subject to elastic admission control by default. #134486
  • The following metrics have been added for Logical Data Replication (LDR):
    • logical_replication.catchup_ranges: the number of source-side ranges conducting catchup scans.
    • logical_replication.scanning_ranges: the number of source-side ranges conducting initial scans.
    • In the DB Console, these metrics may not be accurate if multiple LDR jobs are running. The metrics are accurate when exported from the Prometheus endpoint. #134674
  • The backup and restore syntax update of cockroach workload that was introduced in #134610 has been reverted. #134645

DB Console changes

  • After finalizing an upgrade to v24.3, an updated version of the Databases page will be available. #134244
  • Users with the CONNECT privilege can now access the Databases page. #134542

Bug fixes

  • Fixed a bug where an LDAP connection would be closed by the server and would not be retried by CockroachDB. #134277
  • Fixed a bug that prevented LDAP authorization from successfully assigning CockroachDB roles to users when the source group name contained periods or hyphens. #134944
  • Fixed a bug introduced in v22.2 that could cause significantly increased query latency when executing queries with index or lookup joins when the ordering needs to be maintained. #134367
  • Fixed a bug where UPSERT statements on regional by row tables under non-serializable isolation levels would not show uniqueness constraints in EXPLAIN output. Even when not displayed, the constraints were enforced. #134267
  • Fixed a bug where uniqueness constraints enforced with tombstone writes were not shown in the output of EXPLAIN (OPT). #134482
  • Fixed a bug where DISCARD ALL statements were erroneously counted under the sql.ddl.count metric instead of the sql.misc.count metric. #134510
  • Fixed a bug that could cause a backup or restore operation on AWS to fail with a KMS error due to a missing default shared config. #134536
  • Fixed a bug that could prevent a user from running schema change operations on a restored table that was previously part of a Logical Data Replication (LDR) stream. #134675

Performance improvements

  • The optimizer now generates more efficient query plans involving inverted indexes for queries with a conjunctive filter on the same JSON or ARRAY column. For example:

    SELECT * FROM t WHERE j->'a' = '10' AND j->'b' = '20'
    

    #134002
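
    For the optimizer to use such a plan, the JSON column needs an inverted index; a hypothetical schema matching the example above might be:

    CREATE TABLE t (k INT PRIMARY KEY, j JSONB);
    CREATE INVERTED INDEX ON t (j);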

Build changes

v24.3.0-beta.3

Release Date: November 5, 2024

Downloads

Warning:

CockroachDB v24.3.0-beta.3 is a testing release. Testing releases are intended for testing and experimentation only, and are not qualified for production environments and not eligible for support or uptime SLA commitments.

Note:

Experimental downloads are not qualified for production use and not eligible for support or uptime SLA commitments, whether they are for testing releases or production releases.

Operating System Architecture Full executable SQL-only executable
Linux Intel cockroach-v24.3.0-beta.3.linux-amd64.tgz
(SHA256)
cockroach-sql-v24.3.0-beta.3.linux-amd64.tgz
(SHA256)
ARM cockroach-v24.3.0-beta.3.linux-arm64.tgz
(SHA256)
cockroach-sql-v24.3.0-beta.3.linux-arm64.tgz
(SHA256)
Mac
(Experimental)
Intel cockroach-v24.3.0-beta.3.darwin-10.9-amd64.tgz
(SHA256)
cockroach-sql-v24.3.0-beta.3.darwin-10.9-amd64.tgz
(SHA256)
ARM cockroach-v24.3.0-beta.3.darwin-11.0-arm64.tgz
(SHA256)
cockroach-sql-v24.3.0-beta.3.darwin-11.0-arm64.tgz
(SHA256)
Windows
(Experimental)
Intel cockroach-v24.3.0-beta.3.windows-6.2-amd64.zip
(SHA256)
cockroach-sql-v24.3.0-beta.3.windows-6.2-amd64.zip
(SHA256)

Docker image

Multi-platform images include support for both Intel and ARM. Multi-platform images do not take up additional space on your Docker host.

Within the multi-platform image, both Intel and ARM images are generally available for production use.

To download the Docker image:


docker pull cockroachdb/cockroach-unstable:v24.3.0-beta.3

Source tag

To view or download the source code for CockroachDB v24.3.0-beta.3 on Github, visit v24.3.0-beta.3 source tag.

Changelog

View a detailed changelog on GitHub: v24.3.0-beta.2...v24.3.0-beta.3

Security updates

  • Client authentication errors using LDAP now log more details to help with troubleshooting authentication and authorization issues. #133812

SQL changes

  • Physical Cluster Replication (PCR) reader catalogs can now bypass AOST timestamps via the bypass_pcr_reader_catalog_aost session variable, which makes it possible to modify cluster settings within the reader. #133876

Operational changes

  • Added a timer for inner changefeed sink client flushes. #133288
  • Rows replicated by Logical Data Replication in immediate mode are now considered in the decision to recompute SQL table statistics. #133591
  • The new cluster setting kvadmission.flow_controller.token_reset_epoch can be used to refill replication admission control v2 tokens. This is an advanced setting. Use it only after consultation with your account team. #133294
  • The new cluster setting goschedstats.always_use_short_sample_period.enabled, when set to true, helps to prevent unnecessary queueing due to CPU admission control. #133585

DB Console changes

  • On the Databases pages, the Refresh tooltip now includes details about the progress of cache updates and when the job started. #133351

Bug fixes

  • Fixed a bug where changefeed sink timers were not correctly registered with the metric system. #133288
  • Fixed a bug that could cause new connections to fail with the following error after upgrading: ERROR: invalid value for parameter "vectorize": "unknown(1)" SQLSTATE: 22023 HINT: Available values: off,on,experimental_always. To encounter this bug, the cluster must have:

    1. Run on version v21.1 at some point in the past.
    2. Run SET CLUSTER SETTING sql.defaults.vectorize = 'on'; while running v21.1.
    3. Not set sql.defaults.vectorize after upgrading past v21.1.
    4. Subsequently upgraded all the way to v24.2.

    To detect this bug, run the following query:

    SELECT * FROM system.settings WHERE name = 'sql.defaults.vectorize';
    

    If the command returns 1 instead of on, run the following statement before upgrading.

    RESET CLUSTER SETTING sql.defaults.vectorize;
    

    1 is now allowed as a value for this setting, and is equivalent to on. #133371

  • Fixed a bug in v22.2.13+, v23.1.9+, and v23.2 that could cause the internal error "interface conversion: coldata.Column is" in an edge case. #133762

  • Fixed a bug introduced in v20.1.0 that could cause erroneous NOT NULL constraint violation errors to be logged during UPSERT and INSERT statements with the ON CONFLICT ...DO UPDATE clause that update an existing row and a subset of columns that did not include a NOT NULL column of the table. #133820

  • Fixed a bug that could cache and reuse a non-reusable query plan, such as a plan for a DDL or SHOW statement, when plan_cache_mode was set to auto or force_generic_plan, which are not the default options. #133073

  • Fixed an unhandled error that could occur while running the command REVOKE ... ON SEQUENCE FROM ... {user} on an object that is not a sequence. #133710

  • Fixed a panic that could occur while running a CREATE TABLE AS statement that included a sequence with an invalid function overload. #133870

v24.3.0-beta.2

Release Date: October 28, 2024

Downloads

Warning:

CockroachDB v24.3.0-beta.2 is a testing release. Testing releases are intended for testing and experimentation only, and are not qualified for production environments and not eligible for support or uptime SLA commitments.

Note:

Experimental downloads are not qualified for production use and not eligible for support or uptime SLA commitments, whether they are for testing releases or production releases.

Operating System Architecture Full executable SQL-only executable
Linux Intel cockroach-v24.3.0-beta.2.linux-amd64.tgz
(SHA256)
cockroach-sql-v24.3.0-beta.2.linux-amd64.tgz
(SHA256)
ARM cockroach-v24.3.0-beta.2.linux-arm64.tgz
(SHA256)
cockroach-sql-v24.3.0-beta.2.linux-arm64.tgz
(SHA256)
Mac
(Experimental)
Intel cockroach-v24.3.0-beta.2.darwin-10.9-amd64.tgz
(SHA256)
cockroach-sql-v24.3.0-beta.2.darwin-10.9-amd64.tgz
(SHA256)
ARM cockroach-v24.3.0-beta.2.darwin-11.0-arm64.tgz
(SHA256)
cockroach-sql-v24.3.0-beta.2.darwin-11.0-arm64.tgz
(SHA256)
Windows
(Experimental)
Intel cockroach-v24.3.0-beta.2.windows-6.2-amd64.zip
(SHA256)
cockroach-sql-v24.3.0-beta.2.windows-6.2-amd64.zip
(SHA256)

Docker image

Multi-platform images include support for both Intel and ARM. Multi-platform images do not take up additional space on your Docker host.

Within the multi-platform image, both Intel and ARM images are generally available for production use.

To download the Docker image:


docker pull cockroachdb/cockroach-unstable:v24.3.0-beta.2

Source tag

To view or download the source code for CockroachDB v24.3.0-beta.2 on Github, visit v24.3.0-beta.2 source tag.

Changelog

View a detailed changelog on GitHub: v24.3.0-beta.1...v24.3.0-beta.2

SQL language changes

  • If a table is the destination of a logical data replication stream, then only schema change statements that are deemed safe are allowed on the table. Safe statements are those that do not result in a rebuild of the primary index and do not create an index on a virtual computed column. #133266

Operational changes

  • The two new metrics sql.crud_query.count and sql.crud_query.started.count measure the number of INSERT/UPDATE/DELETE/SELECT queries executed and started respectively. #133198
  • When creating a logical data replication stream, any user-defined types in the source and destination are now checked for equivalency. This allows for creating a stream that handles user-defined types without needing to use the WITH SKIP SCHEMA CHECK option as long as the stream uses mode = immediate. #133274
  • Logical data replication streams that reference tables with user-defined types can now be created with the mode = immediate option. #133295

DB Console changes

  • The SQL Statements graph on the Overview and SQL dashboard pages in DB Console has been renamed SQL Queries Per Second and now shows Total Queries as a general Queries Per Second (QPS) metric. #133198
  • Due to the inaccuracy of the Range Count column on the Databases page and the cost incurred to fetch the correct range count for every database in a cluster, this data will no longer be visible. This data is still available via a SHOW RANGES query. #133267

Bug fixes

v24.3.0-beta.1

Release Date: October 24, 2024

Downloads

Warning:

CockroachDB v24.3.0-beta.1 is a testing release. Testing releases are intended for testing and experimentation only, and are not qualified for production environments and not eligible for support or uptime SLA commitments.

Note:

Experimental downloads are not qualified for production use and not eligible for support or uptime SLA commitments, whether they are for testing releases or production releases.

Operating System Architecture Full executable SQL-only executable
Linux Intel cockroach-v24.3.0-beta.1.linux-amd64.tgz
(SHA256)
cockroach-sql-v24.3.0-beta.1.linux-amd64.tgz
(SHA256)
ARM cockroach-v24.3.0-beta.1.linux-arm64.tgz
(SHA256)
cockroach-sql-v24.3.0-beta.1.linux-arm64.tgz
(SHA256)
Mac
(Experimental)
Intel cockroach-v24.3.0-beta.1.darwin-10.9-amd64.tgz
(SHA256)
cockroach-sql-v24.3.0-beta.1.darwin-10.9-amd64.tgz
(SHA256)
ARM cockroach-v24.3.0-beta.1.darwin-11.0-arm64.tgz
(SHA256)
cockroach-sql-v24.3.0-beta.1.darwin-11.0-arm64.tgz
(SHA256)
Windows
(Experimental)
Intel cockroach-v24.3.0-beta.1.windows-6.2-amd64.zip
(SHA256)
cockroach-sql-v24.3.0-beta.1.windows-6.2-amd64.zip
(SHA256)

Docker image

Multi-platform images include support for both Intel and ARM. Multi-platform images do not take up additional space on your Docker host.

Within the multi-platform image, both Intel and ARM images are generally available for production use.

To download the Docker image:


docker pull cockroachdb/cockroach-unstable:v24.3.0-beta.1

Source tag

To view or download the source code for CockroachDB v24.3.0-beta.1 on Github, visit v24.3.0-beta.1 source tag.

Changelog

View a detailed changelog on GitHub: v24.3.0-alpha.2...v24.3.0-beta.1

General changes

Enterprise edition changes

  • Authorization with LDAP now works only when the ldapgrouplistfilter option is present in the HBA configuration; otherwise, authentication proceeds with the LDAP auth method options provided in the HBA configuration. This ensures that external authorization with LDAP is opt-in rather than enabled by default. #132235
  • Added a changefeed sink error metric changefeed.sink_errors, and expanded reporting of the internal retries metric changefeed.internal_retry_message_count to all sinks that perform internal retries. #132092

SQL language changes

  • Implemented DROP TRIGGER statements. The CASCADE option for dropping a trigger is not supported. #128540
  • Added support for CREATE TRIGGER. The OR REPLACE syntax is not supported. Also, triggers cannot be executed, so creation is a no-op. #128540
  • REGIONAL BY ROW and PARTITION ALL BY tables can now be inserted into under non-SERIALIZABLE isolation levels as long as there is no ON CONFLICT clause in the statement. Also, REGIONAL BY ROW and PARTITION ALL BY tables can now be updated under non-SERIALIZABLE isolation levels. #129837
  • Attempting to add foreign keys referencing a table with row-level TTL enabled will generate a notice informing the user about potential impact on the row-level TTL deletion job. Similarly, a notice is generated while attempting to enable row-level TTL on a table that has inbound foreign key references. #127935
  • It is now possible to assign to an element of a composite typed variable in PL/pgSQL. For example, given a variable foo with two integer elements x and y, the following assignment statement is allowed: foo.x := 100;. #132628
  • Backup and restore now work for tables with triggers. When the skip_missing_udfs option is applied, triggers with missing trigger functions are removed from the table. #128555
  • UPSERT and INSERT ... ON CONFLICT statements are now supported on REGIONAL BY ROW tables under READ COMMITTED isolation. #132768
  • Added support for row-level BEFORE triggers. A row-level trigger executes the trigger function for each row that is being mutated. BEFORE triggers fire before the mutation operation. #132511
  • Added support for PL/pgSQL integer FOR loops, which iterate over a range of integer values. #130211
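
    As a brief sketch of the new integer FOR loop support, the following hypothetical PL/pgSQL function sums the integers from 1 to n:

    CREATE FUNCTION sum_to_n(n INT) RETURNS INT LANGUAGE PLpgSQL AS $$
    DECLARE
      total INT := 0;
    BEGIN
      FOR i IN 1..n LOOP
        total := total + i;
      END LOOP;
      RETURN total;
    END;
    $$;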

Operational changes

  • Admission Control now has an integration for pacing snapshot ingest traffic based on disk bandwidth. kvadmission.store.snapshot_ingest_bandwidth_control.enabled is used to turn on this integration. It requires provisioned bandwidth to be set for the store (or cluster through the cluster setting) for it to take effect. #131243
  • Added validation to check whether audit logging and buffering configurations are both present in the file log sink, since the two should not be combined. #132742
  • Updated the file log sink validation message to clearly indicate the expected valid configuration. #132899

DB Console changes

Bug fixes

  • Addressed a rare bug that could prevent backups taken during a DROP COLUMN operation with a sequence owner from restoring with the error: rewriting descriptor ids: missing rewrite for <id> in SequenceOwner.... #132202
  • Fixed a bug existing since before v23.1 that could lead to incorrect results in rare cases. The bug requires a join between two tables with an equality between columns with equivalent, but not identical types (e.g., OID and REGCLASS). In addition, the join must lookup into an index that includes a computed column that references one of the equivalent columns. #126345
  • Fixed a bug existing since before v23.1 that could lead to incorrect results in rare cases. The bug requires a lookup join into a table with a computed index column, where the computed column expression is composite sensitive. A composite sensitive expression can compare differently if supplied non-identical but equivalent input values (e.g., 2.0::DECIMAL versus 2.00::DECIMAL). #126345
  • Fixed a bug that caused quotes around the name of a routine to be dropped when it was called within another routine. This could prevent the correct routine from being resolved if the nested routine name was case-sensitive. The bug has existed since v24.1 when nested routines were introduced. #131643
  • Fixed a bug where the SQL shell would print out the previous error message when executing the quit command. #130736
  • Fixed a bug where a span statistics request on a mixed-version cluster resulted in a null pointer exception. #132349
  • Fixed an issue where changefeeds would fail to update protected timestamp records in the face of retryable errors. #132712
  • The franz-go library has been updated to fix a potential deadlock on changefeed restarts. #132761
  • Fixed a bug that in rare cases could cause incorrect evaluation of scalar expressions involving NULL values. #132261
  • Fixed a bug in the query optimizer that in rare cases could cause CockroachDB nodes to crash. The bug could occur when a query contains a filter in the form col IN (elem0, elem1, ..., elemN) only when N is very large (e.g., 1.6+ million), and when col exists in a hash-sharded index or in a table with an indexed, computed column that depends on col. #132701
  • The proretset column of the pg_catalog.pg_proc table is now properly set to true for set-returning built-in functions. #132853
  • Fixed an error that could be caused by using an AS OF SYSTEM TIME expression that references a user-defined (or unknown) type name. These kinds of expressions are invalid, but previously the error was not handled properly. Now, a correct error message is returned. #132348

Build changes

v24.3.0-alpha.2

Release Date: October 14, 2024

Downloads

Warning:

CockroachDB v24.3.0-alpha.2 is a testing release. Testing releases are intended for testing and experimentation only, and are not qualified for production environments and not eligible for support or uptime SLA commitments.

Note:

Experimental downloads are not qualified for production use and not eligible for support or uptime SLA commitments, whether they are for testing releases or production releases.

Operating System Architecture Full executable SQL-only executable
Linux Intel cockroach-v24.3.0-alpha.2.linux-amd64.tgz
(SHA256)
cockroach-sql-v24.3.0-alpha.2.linux-amd64.tgz
(SHA256)
ARM cockroach-v24.3.0-alpha.2.linux-arm64.tgz
(SHA256)
cockroach-sql-v24.3.0-alpha.2.linux-arm64.tgz
(SHA256)
Mac
(Experimental)
Intel cockroach-v24.3.0-alpha.2.darwin-10.9-amd64.tgz
(SHA256)
cockroach-sql-v24.3.0-alpha.2.darwin-10.9-amd64.tgz
(SHA256)
ARM cockroach-v24.3.0-alpha.2.darwin-11.0-arm64.tgz
(SHA256)
cockroach-sql-v24.3.0-alpha.2.darwin-11.0-arm64.tgz
(SHA256)
Windows
(Experimental)
Intel cockroach-v24.3.0-alpha.2.windows-6.2-amd64.zip
(SHA256)
cockroach-sql-v24.3.0-alpha.2.windows-6.2-amd64.zip
(SHA256)

Docker image

Multi-platform images include support for both Intel and ARM. Multi-platform images do not take up additional space on your Docker host.

Within the multi-platform image, both Intel and ARM images are generally available for production use.

To download the Docker image:


docker pull cockroachdb/cockroach-unstable:v24.3.0-alpha.2

Source tag

To view or download the source code for CockroachDB v24.3.0-alpha.2 on Github, visit v24.3.0-alpha.2 source tag.

Changelog

View a detailed changelog on GitHub: v24.3.0-alpha.1...v24.3.0-alpha.2

Security updates

  • The parameters for an HBA config entry for LDAP are now validated when the entry is created or amended, in addition to the validation that happens during an authentication attempt. #132086

  • Added automatic cleanup and validation for default privileges that reference dropped roles after a major-version upgrade to v24.3. #131782

General changes

  • Changed the license cockroach is distributed under to the new CockroachDB Software License (CSL). #131799 #131794 #131793

Enterprise edition changes

SQL language changes

  • To view comments on a type, you can use the new SHOW TYPES WITH COMMENT command. Comments can be added using COMMENT ON. #131183
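
    For example, with a hypothetical enum type:

    CREATE TYPE ticket_status AS ENUM ('open', 'closed');
    COMMENT ON TYPE ticket_status IS 'Lifecycle states for support tickets';
    SHOW TYPES WITH COMMENT;
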
  • You can create or alter a user-defined function (UDF) or stored procedure (SP) with [EXTERNAL] SECURITY DEFINER instead of the default [EXTERNAL] SECURITY INVOKER. With SECURITY DEFINER, the privileges of the owner are checked when the UDF or SP is executed, rather than the privileges of the executor. The EXTERNAL keyword is optional and exists for SQL language conformity. #129720

Operational changes

  • The following new metrics show details about replication flow control send queue when the cluster setting kvadmission.flow_control.enabled is set to true and the cluster setting kvadmission.flow_control.mode is set to apply_to_all.

    • kvflowcontrol.tokens.send.regular.deducted.prevent_send_queue
    • kvflowcontrol.tokens.send.elastic.deducted.prevent_send_queue
    • kvflowcontrol.tokens.send.elastic.deducted.force_flush_send_queue
    • kvflowcontrol.range_controller.count
    • kvflowcontrol.send_queue.bytes
    • kvflowcontrol.send_queue.count
    • kvflowcontrol.send_queue.prevent.count
    • kvflowcontrol.send_queue.scheduled.deducted_bytes
    • kvflowcontrol.send_queue.scheduled.force_flush

    #131857

  • The following metrics have been renamed:

    Previous name    New name
    kvflowcontrol.tokens.eval.regular.disconnected kvflowcontrol.tokens.eval.regular.returned.disconnect
    kvflowcontrol.tokens.eval.elastic.disconnected kvflowcontrol.tokens.eval.elastic.returned.disconnect
    kvflowcontrol.tokens.send.regular.disconnected kvflowcontrol.tokens.send.regular.returned.disconnect
    kvflowcontrol.tokens.send.elastic.disconnected kvflowcontrol.tokens.send.elastic.returned.disconnect

    #131857

Cluster virtualization changes

  • The _status/ranges/ endpoint on DB Console Advanced debug pages is now enabled for non-system virtual clusters, where it returns the ranges only for the tenant you are logged into. For the system virtual cluster, the _status/ranges/ endpoint continues to return ranges for the specified node across all virtual clusters. #131100

DB Console changes

  • Improved performance in the Databases, Tables View, and Table Details sections of the Databases page. #131769

Bug fixes

  • Fixed a bug where JSON values returned by cockroach commands using the --format=sql flag were not correctly escaped if they contained double quotes within a string. #131881
  • Fixed an error that could happen if an aggregate function was used as the value in a SET command. #131891
  • Fixed a rare bug introduced in v22.2 in which an update of a primary key column could fail to update the primary index if it is also the only column in a separate column family. #131869
  • Fixed a rare bug where dropping a column of FLOAT4, FLOAT8, DECIMAL, JSON, ARRAY, or collated STRING type stored in a single column family could prevent subsequent reading of the table if the column family was not the first column family. #131967
  • Fixed an unimplemented internal error that could occur when ordering by a VECTOR column. #131703

Performance improvements

  • Efficiency has been improved when writing string-like values over the PostgreSQL wire protocol. #131964
  • Error handling during periodic table history polling has been improved when the schema_locked table parameter is not used. #131951

v24.3.0-alpha.1

Release Date: October 9, 2024

Downloads

Warning:

CockroachDB v24.3.0-alpha.1 is a testing release. Testing releases are intended for testing and experimentation only, and are not qualified for production environments and not eligible for support or uptime SLA commitments.

Note:

Experimental downloads are not qualified for production use and not eligible for support or uptime SLA commitments, whether they are for testing releases or production releases.

Operating System Architecture Full executable SQL-only executable
Linux Intel cockroach-v24.3.0-alpha.1.linux-amd64.tgz
(SHA256)
cockroach-sql-v24.3.0-alpha.1.linux-amd64.tgz
(SHA256)
ARM cockroach-v24.3.0-alpha.1.linux-arm64.tgz
(SHA256)
cockroach-sql-v24.3.0-alpha.1.linux-arm64.tgz
(SHA256)
Mac
(Experimental)
Intel cockroach-v24.3.0-alpha.1.darwin-10.9-amd64.tgz
(SHA256)
cockroach-sql-v24.3.0-alpha.1.darwin-10.9-amd64.tgz
(SHA256)
ARM cockroach-v24.3.0-alpha.1.darwin-11.0-arm64.tgz
(SHA256)
cockroach-sql-v24.3.0-alpha.1.darwin-11.0-arm64.tgz
(SHA256)
Windows
(Experimental)
Intel cockroach-v24.3.0-alpha.1.windows-6.2-amd64.zip
(SHA256)
cockroach-sql-v24.3.0-alpha.1.windows-6.2-amd64.zip
(SHA256)

Docker image

Multi-platform images include support for both Intel and ARM. Multi-platform images do not take up additional space on your Docker host.

Within the multi-platform image, both Intel and ARM images are generally available for production use.

To download the Docker image:


docker pull cockroachdb/cockroach-unstable:v24.3.0-alpha.1

Source tag

To view or download the source code for CockroachDB v24.3.0-alpha.1 on Github, visit v24.3.0-alpha.1 source tag.

Security updates

  • URLs in the CREATE CHANGEFEED and CREATE SCHEDULE FOR CHANGEFEED SQL statements are now sanitized of any secrets before being written to unredacted logs. #126970
  • The LDAP cluster settings server.ldap_authentication.client.tls_certificate and server.ldap_authentication.client.tls_key previously did not have callbacks installed to reload the setting values for the LDAP authManager. The necessary callbacks have been added. #131151
  • The cluster settings for host-based authentication configuration (server.host_based_authentication.configuration) and identity map configuration (server.identity_map.configuration) can contain sensitive information, such as LDAP bind usernames and passwords and mappings of external identities to SQL users. These cluster settings can now be configured for redaction via the server.redact_sensitive_settings.enabled cluster setting. #131150
  • Added support for configuring authorization using LDAP. During login, the list of groups that a user belongs to are fetched from the LDAP server. These groups are mapped to SQL roles by extracting the common name (CN) from the group. After authenticating the user, the login flow grants these roles to the user, and revokes any other roles that are not returned by the LDAP server. The groups given by the LDAP server are treated as the sole source of truth for role memberships, so any roles that were manually granted to the user will not remain in place. #131043
  • Previously, the host-based authentication (HBA) configuration cluster setting server.host_based_authentication.configuration was unable to handle double quotes in authentication method option values. For example, for the following entry:

    host all all all ldap ldapserver=ldap.example.com ldapport=636 ldapbasedn="ou=users,dc=example,dc=com" ldapbinddn="cn=readonly,dc=example,dc=com" ldapbindpasswd=readonly_password ldapsearchattribute=uid ldapsearchfilter="(memberof=cn=cockroachdb_users,ou=groups,dc=example,dc=com)"
    

    The HBA parser would fail after incorrectly parsing ldapbinddn="cn=readonly,dc=example,dc=com" as 2 separate options (ldapbinddn= and cn=readonly,dc=example,dc=com). Now, the 2 tokens are set as the key and value, respectively, for the same HBA configuration option. #131480

General changes

Enterprise edition changes

  • Added a CompressionLevel field to the changefeed kafka_sink_config option. Changefeeds will use this compression level when emitting events to a Kafka sink. The possible values depend on the chosen compression codec; the CompressionLevel field lets you optimize for faster or stronger compression. #125456
  • The updated version of the CockroachDB changefeed Kafka sink implementation now supports specifying compression levels. #127827
  • Introduced the cluster setting server.jwt_authentication.client.timeout to capture the HTTP client timeout for external calls made during JWT authentication. #127145
  • The JWT authentication cluster settings have been made public. #128170
  • Updated certain error messages to refer to the stable docs tree rather than an explicit version. #128842
  • Disambiguated metrics and logs for the two buffers used by the KV feed. The affected metrics now have a suffix indicating which buffer they correspond to: changefeed.buffer_entries.*, changefeed.buffer_entries_mem.*, changefeed.buffer_pushback_nanos.*. The previous versions are still supported for backward compatibility, though using the new format is recommended. #128813
  • Added support for authorization to a CockroachDB cluster via LDAP by retrieving AD group membership information for the LDAP user. The new HBA configuration option ldapgrouplistfilter performs a filtered search query on the LDAP server for matching groups. An example HBA configuration entry to support LDAP authorization:

    # TYPE    DATABASE      USER           ADDRESS             METHOD             OPTIONS
    # Allow all users to connect using LDAP authentication with search and bind
    host    all           all            all                 ldap               ldapserver=ldap.example.com ldapport=636 "ldapbasedn=ou=users,dc=example,dc=com" "ldapbinddn=cn=readonly,dc=example,dc=com" ldapbindpasswd=readonly_password ldapsearchattribute=uid "ldapsearchfilter=(memberof=cn=cockroachdb_users,ou=groups,dc=example,dc=com)" "ldapgrouplistfilter=(objectClass=groupOfNames)"
    # Fallback to password authentication for the root user
    host    all           root           0.0.0.0/0          password
    

    For example, to use with an Azure AD server:

    SET cluster setting server.host_based_authentication.configuration = 'host    all           all            all                 ldap ldapserver=azure.dev ldapport=636 "ldapbasedn=OU=AADDC Users,DC=azure,DC=dev" "ldapbinddn=CN=Some User,OU=AADDC Users,DC=azure,DC=dev" ldapbindpasswd=my_pwd ldapsearchattribute=sAMAccountName "ldapsearchfilter=(memberOf=CN=azure-dev-domain-sync-users,OU=AADDC Users,DC=crlcloud,DC=dev)" "ldapgrouplistfilter=(objectCategory=CN=Group,CN=Schema,CN=Configuration,DC=crlcloud,DC=dev)"
    host    all           root           0.0.0.0/0          password';
    

    After configuration, the CockroachDB cluster should be able to authorize users via the LDAP server if:

    1. The user's LDAP authentication attempt is successful, and the user's DN is obtained from the LDAP server.
    2. ldapgrouplistfilter is properly configured, and it successfully syncs groups of the user. #128498
  • Added changefeed support for the mvcc_timestamp option when the changefeed is emitting in avro format. If both options are specified, the Avro schema includes an mvcc_timestamp metadata field and emits the row's MVCC timestamp with the row data. #129840

  • Updated the cluster setting changefeed.sink_io_workers with all the sinks that support the setting. #129946

  • Added an LDAP authentication method to complement password-based login for the DB Console. LDAP is used for DB Console authentication if the HBA configuration has an LDAP entry for the user attempting login, along with other matching criteria (like the request's originating IP address). #130418

  • Added timers around key parts of the changefeed pipeline to help debug feeds experiencing issues. The changefeed.stage.<stage>.latency metrics now emit latency histograms for each stage. The metric respects the changefeed scope label for debugging specific feeds. #128794

  • For enterprise changefeeds, events changefeed_failed and create_changefeed now include a JobId field. #131396

  • The new metric seconds_until_license_expiry allows you to monitor the status of a cluster's Enterprise license. #129052.

  • Added the changefeed.total_ranges metric, which monitors the number of ranges that are watched by changefeed aggregators. It shares the same polling interval as changefeed.lagging_ranges, which is controlled by the existing lagging_ranges_polling_interval option. #130897
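
    For example, a sketch that lengthens the polling interval; the table name, sink URI, and interval below are placeholders:

    CREATE CHANGEFEED FOR TABLE orders
      INTO 'kafka://kafka.example.com:9092'
      WITH lagging_ranges_polling_interval = '5m';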

SQL language changes

  • Added a session setting, optimizer_use_merged_partial_statistics, which defaults to false. When set to true, it enables the use of existing partial statistics merged with full statistics when optimizing a query. #126948
  • The enable_create_stats_using_extremes session setting is now true by default. Partial statistics at extremes can be collected using the CREATE STATISTICS <stat_name> ON <column_name> FROM <table_name> USING EXTREMES syntax. #127850
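
    For example, a sketch assuming a table orders with an indexed column created_at:

    CREATE STATISTICS orders_created_at_extremes
      ON created_at FROM orders USING EXTREMES;
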
  • Added SHOW SCHEMAS WITH COMMENT and SHOW SCHEMAS FROM database_name WITH COMMENT functionality similar to SHOW TABLES and SHOW DATABASES. #127816
  • The deadlock_timeout session variable is now supported. It specifies how long to wait on a lock before pushing the lock holder for deadlock detection, and it can be set at session granularity. #128506
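
    For example, a sketch that waits one second before pushing the lock holder (the duration is illustrative):

    SET deadlock_timeout = '1s';
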
  • Partial statistics at extremes can now be collected on all valid columns of a table using the CREATE STATISTICS <stat_name> FROM <table_name> USING EXTREMES syntax, without an ON <col_name> clause. Valid columns are all single column prefixes of a forward index excluding partial, sharded, and implicitly partitioned indexes. #127836
  • Partial statistics can now be automatically collected at the extremes of indexes when a certain fraction and minimum number of rows are stale (by default 5% and 100 respectively). These can be configured with new table storage parameters and cluster settings, and the feature is disabled by default. The new cluster settings and table parameters are:
    • sql.stats.automatic_partial_collection.enabled/sql_stats_automatic_partial_collection_enabled, defaults to false.
    • sql.stats.automatic_partial_collection.min_stale_rows/sql_stats_automatic_partial_collection_min_stale_rows, defaults to 100.
    • sql.stats.automatic_partial_collection.fraction_stale_rows/sql_stats_automatic_partial_collection_fraction_stale_rows, defaults to 0.05. #93067
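
    For example, a sketch that enables the feature cluster-wide and for a single table (the table name is a placeholder):

    SET CLUSTER SETTING sql.stats.automatic_partial_collection.enabled = true;
    ALTER TABLE orders SET (sql_stats_automatic_partial_collection_enabled = true);
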
  • The session variable enforce_home_region_follower_reads_enabled is now deprecated, and will be removed in a future release. The related session variable enforce_home_region is not deprecated. #129024
  • Added a new cluster setting to control whether the most common values are collected as part of histogram collection for use by the optimizer. The setting is called sql.stats.histogram_buckets.include_most_common_values.enabled. When enabled, the histogram collection logic ensures that the most common sampled values are represented as histogram bucket upper bounds. Because histograms in CockroachDB track the number of elements equal to the upper bound in addition to the number of elements less than it, this allows the optimizer to identify the most common values in the histogram and better estimate the number of rows processed by a query plan. To control how many most common values are included in a histogram, a second setting, sql.stats.histogram_buckets.max_fraction_most_common_values, was added. The default is 0.1, or 10% of the number of buckets; with a 200-bucket histogram, at most 20 buckets may be adjusted by default to include a most common value as the upper bound. #129378
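
    For example, a sketch that enables the feature and raises the fraction to 20% of buckets (the value is illustrative):

    SET CLUSTER SETTING sql.stats.histogram_buckets.include_most_common_values.enabled = true;
    SET CLUSTER SETTING sql.stats.histogram_buckets.max_fraction_most_common_values = 0.2;
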
  • Added a new column to crdb_internal.table_spans to indicate whether a table is dropped. Rows for dropped tables will be removed once they are garbage collected. #128788
  • Added the cluster setting sql.txn.repeatable_read_isolation.enabled, which defaults to false. When set to true, the following statements will configure transactions to run under REPEATABLE READ isolation, rather than being automatically interpreted as SERIALIZABLE:

    • BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ
    • SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
    • SET default_transaction_isolation = 'repeatable read'
    • SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL REPEATABLE READ

    This setting was added because REPEATABLE READ transactions are a preview feature, so their use is opt-in for v24.3. In a future CockroachDB major version, this setting will default to true. #130089
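
    For example, a minimal opt-in sketch:

    SET CLUSTER SETTING sql.txn.repeatable_read_isolation.enabled = true;
    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    -- statements here run under REPEATABLE READ rather than SERIALIZABLE
    COMMIT;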

  • Previously, SHOW CHANGEFEED JOBS showed changefeed jobs from the last 14 days by default. Now, it uses the same age filter as SHOW JOBS, which shows jobs from the last 12 hours by default. #127584

  • Set the default for the session variable large_full_scan_rows to 0. This means that by default, disallow_full_table_scans will disallow all full table scans, even full scans on very small tables. If large_full_scan_rows is set to a value greater than 0, disallow_full_table_scans will allow full scans that are estimated to read fewer than large_full_scan_rows rows. #131040
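
    For example, a sketch that keeps full-scan protection but re-allows small scans (the threshold is illustrative):

    SET disallow_full_table_scans = true;
    SET large_full_scan_rows = 1000;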

  • It is now possible to create PL/pgSQL trigger functions, which can be executed by a trigger in response to table mutation events. Note that this patch does not add support for triggers, only trigger functions. #126734
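
    For example, a minimal sketch of a trigger function; the function name and body are illustrative, and the trigger itself cannot yet be created:

    CREATE FUNCTION audit_row() RETURNS TRIGGER AS $$
      BEGIN
        -- a future trigger would supply the modified row as NEW
        RETURN NEW;
      END
    $$ LANGUAGE PLpgSQL;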

  • Cluster settings enterprise.license and diagnostics.reporting.enabled now have additional validation. #131097

  • The SHOW SESSIONS command was changed to include an authentication_method column in the result. This column will show the method used to authenticate the session, for example, password, cert, LDAP, etc. #131625
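
    For example, a sketch that surfaces the new column (assuming the standard SHOW SESSIONS column names):

    SELECT session_id, user_name, authentication_method FROM [SHOW SESSIONS];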

Operational changes

  • Events DiskSlownessDetected and DiskSlownessCleared are now logged when disk slowness is detected and cleared on a store. #127025
  • Several cluster settings allow you to configure rate limiting for traffic to cloud storage over various protocols. These settings begin with cloudstorage. #127207
  • The new cluster setting kv.range.range_size_hard_cap allows you to limit how large a range can grow before backpressure is applied. This can help mitigate situations where a range cannot be split, such as when a range consists of a single key due to an issue with the schema, the workload pattern, or a bug in client application code. The default is 8 GiB, which is 16 times the default max range size. If you have changed the max range size, you may need to adjust this cluster setting or reduce the range size. #129450
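
    For example, a sketch that raises the cap (the value is illustrative):

    SET CLUSTER SETTING kv.range.range_size_hard_cap = '16GiB';
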
  • The following kvflowcontrol metrics have been renamed. After a cluster is finalized on v24.3, both old and new metrics will be populated. The previous metrics under kvadmission.flow_controller will be removed.

    Old metric names New metric names
    kvadmission.flow_controller.regular_tokens_available kvflowcontrol.tokens.eval.regular.available
    kvadmission.flow_controller.elastic_tokens_available kvflowcontrol.tokens.eval.elastic.available
    kvadmission.flow_controller.regular_tokens_deducted kvflowcontrol.tokens.eval.regular.deducted
    kvadmission.flow_controller.elastic_tokens_deducted kvflowcontrol.tokens.eval.elastic.deducted
    kvadmission.flow_controller.regular_tokens_returned kvflowcontrol.tokens.eval.regular.returned
    kvadmission.flow_controller.elastic_tokens_returned kvflowcontrol.tokens.eval.elastic.returned
    kvadmission.flow_controller.regular_tokens_unaccounted kvflowcontrol.tokens.eval.regular.unaccounted
    kvadmission.flow_controller.elastic_tokens_unaccounted kvflowcontrol.tokens.eval.elastic.unaccounted
    kvadmission.flow_controller.regular_stream_count kvflowcontrol.streams.eval.regular.total_count
    kvadmission.flow_controller.elastic_stream_count kvflowcontrol.streams.eval.elastic.total_count
    kvadmission.flow_controller.regular_requests_waiting kvflowcontrol.eval_wait.regular.requests.waiting
    kvadmission.flow_controller.elastic_requests_waiting kvflowcontrol.eval_wait.elastic.requests.waiting
    kvadmission.flow_controller.regular_requests_admitted kvflowcontrol.eval_wait.regular.requests.admitted
    kvadmission.flow_controller.elastic_requests_admitted kvflowcontrol.eval_wait.elastic.requests.admitted
    kvadmission.flow_controller.regular_requests_errored kvflowcontrol.eval_wait.regular.requests.errored
    kvadmission.flow_controller.elastic_requests_errored kvflowcontrol.eval_wait.elastic.requests.errored
    kvadmission.flow_controller.regular_requests_bypassed kvflowcontrol.eval_wait.regular.requests.bypassed
    kvadmission.flow_controller.elastic_requests_bypassed kvflowcontrol.eval_wait.elastic.requests.bypassed
    kvadmission.flow_controller.regular_wait_duration kvflowcontrol.eval_wait.regular.duration
    kvadmission.flow_controller.elastic_wait_duration kvflowcontrol.eval_wait.elastic.duration

    #130167

  • The new ranges.decommissioning metric shows the number of ranges with a replica on a decommissioning node. #130117

  • New cluster settings have been added which control the refresh behavior for the cached data in the Databases page of the DB Console:

    • obs.tablemetadatacache.data_valid_duration: the duration for which the data in system.table_metadata is considered valid before a cache reset will occur. Default: 20 minutes.
    • obs.tablemetadatacache.automatic_updates.enabled: whether to automatically update the cache according to the validity interval. Default: false.

    #130198
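
    For example, a sketch that turns on automatic updates and shortens the validity window (the duration is illustrative):

    SET CLUSTER SETTING obs.tablemetadatacache.automatic_updates.enabled = true;
    SET CLUSTER SETTING obs.tablemetadatacache.data_valid_duration = '10m';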

  • New gauge metrics security.certificate.expiration.{cert-type} and security.certificate.ttl.{cert-type} show the expiration and TTL for a certificate. #130110

  • To set the logging format for stderr, you can now set the format field to any valid format, rather than only crdb-v2-tty. #131529

  • The following new metrics show connection latency for each SQL authentication method:

    Authentication method Metric
    Certificate auth_cert_conn_latency
    Java Web Token (JWT) auth_jwt_conn_latency
    Kerberos GSS auth_gss_conn_latency
    LDAP auth_ldap_conn_latency
    Password auth_password_conn_latency
    SCRAM SHA-256 auth_scram_conn_latency

    #131578

  • Verbose logging of slow Pebble reads can no longer be enabled via the shorthand flag --vmodule=pebble_logger_and_tracer=2, where pebble_logger_and_tracer contains the CockroachDB implementation of the logger needed by Pebble. Instead, you must list the Pebble files that contain the log statements. For example --vmodule=reader=2,table=2. #127066

  • The lowest admission control priority for the storage layer has been renamed from ttl-low-pri to bulk-low-pri. #129564

  • New clusters will now have a zone configuration defined for the timeseries range, which specifies gc.ttlseconds and inherits all other attributes from the zone config of the default range. This zone config will also be added to a cluster that is upgraded to v24.3 if it does not already have one defined. #128032
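
    For example, a sketch that inspects and adjusts the new zone configuration (the gc.ttlseconds value is illustrative):

    SHOW ZONE CONFIGURATION FROM RANGE timeseries;
    ALTER RANGE timeseries CONFIGURE ZONE USING gc.ttlseconds = 14400;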

Command-line changes

DB Console changes

  • If a range is larger than twice the max range size, it will now display in the Problem Ranges page in the DB Console. #129001
  • Updated some metric charts on the Overview and Replication dashboards to omit verbose details in the legends for easier browsing. #129149
  • Updated the icon for notification alerts to use the new CockroachDB logo. #130333
  • The txn.restarts.writetoooldmulti metric was rolled into the txn.restarts.writetooold metric in the v24.1.0-alpha.1 release. txn.restarts.writetoooldmulti has now been removed altogether. #131642
  • The grants table on the DB Details page (shown when you click a database in the databases list) now shows database-level grants. Previously, it showed grants per table in the database. #131250
  • Added new database pages that are available from the side navigation Databases link. #131594
  • The DB Console will reflect any throttling behavior from the cluster due to an expired license or missing telemetry data. Enterprise licenses are not affected. #131326
  • Users can hover over the node/region cell in multi-region deployments to view a list of nodes the database or table is on. #130704
  • The Databases pages in the DB console have been updated to read cached metadata about database and table storage statistics. The cache update time is now displayed in the top right-hand corner of the database and tables list pages. Users may trigger a cache refresh with the refresh icon next to the last updated time. The cache will also update automatically when users visit a Databases page and the cache is older than or equal to 20 minutes. #131463

Bug fixes

  • Fixed a bug where CockroachDB could incorrectly evaluate an IS NOT NULL filter if it was applied to non-NULL tuples that had NULL elements (like (1, NULL) or (NULL, NULL)). The bug was present since v20.2. #126901
  • Fixed a bug related to displaying the names of composite types in the SHOW CREATE TABLE command. The names are now shown as two-part names, which disambiguates the output and makes it more portable to other databases. #127158
  • The CONCAT() built-in function now accepts arguments of any data type. #127098
  • Fixed a bug that prevented merged statistics from being created after injecting statistics or recreating statement bundles. This would occur when the injected statistics or statement bundle contained related full and partial statistics. #127252
  • Fixed a bug where CockroachDB could encounter spurious ERROR: context canceled errors (encountered after some results were delivered) in rare cases when evaluating some queries. The bug was present since v22.2. The bug was triggered by queries that:
    • Had to be executed locally.
    • Had a LIMIT.
    • Had at least two UNION clauses.
    • Had some lookup or index joins in the UNION branches. #127076
  • Updated the restore job description from RESTORE ... FROM to RESTORE FROM {backup} IN {collectionURI} to reflect the new RESTORE syntax. #127970
  • Fixed a bug that could cause a CASE statement with multiple subqueries to produce the side effects of one of the subqueries even if that subquery shouldn't have been evaluated. #120327
  • Changed the schema changer’s merge process so that it can detect contention errors and automatically retry with a smaller batch size. This makes the merge process more likely to succeed without needing to manually tune settings. #128201
  • SHOW CREATE ALL TYPES now shows corresponding type comments in its output. #128084
  • The statement_timeout session setting is now enforced while waiting for jobs after a schema change in an implicit transaction. #128474
  • Fixed a bug where certain dropdowns in the DB Console appeared to be empty (with no options to select from) for users of the Safari browser. #128996
  • Fixed a bug that would cause the hlc_to_timestamp function to return an incorrect timestamp for some input decimals. #129153
  • Fixed a memory leak where statement insight objects could leak if the session was closed without the transaction finishing. #128400
  • Fixed a bug in the public preview WAL failover feature that could prevent a node from starting if it crashed during a failover. #129331
  • Fixed a bug where 'infinity'::TIMESTAMP returned a different result than PostgreSQL. #127141
  • Fixed a spurious error log from the replication queue involving the text " needs lease, not adding". #129351
  • Using more than one DECLARE statement in the definition of a user-defined function now correctly declares additional variables. #129951
  • Fixed a bug in which some SELECT FOR UPDATE or SELECT FOR SHARE queries using NOWAIT could still block on locked rows when using the optimizer_use_lock_op_for_serializable session setting under serializable isolation. This bug was introduced with optimizer_use_lock_op_for_serializable in v23.2.0. #130103
  • Fixed a bug in the upgrade pre-condition for repairing descriptor corruption that could lead to finalization being stuck. #130064
  • Fixed a bug that caused the optimizer to plan unnecessary post-query uniqueness checks during INSERT, UPSERT, and UPDATE statements on tables with partial, unique, hash-sharded indexes. These unnecessary checks added overhead to execution of these statements, and caused the statements to error when executed under READ COMMITTED isolation. #130366
  • Fixed a bug that caused incorrect evaluation of CASE, COALESCE, and IF expressions with branches producing fixed-width string-like types, such as CHAR. In addition, the BPCHAR type no longer incorrectly imposes a length limit of 1. #129007
  • Fixed a bug where zone configuration changes issued by the declarative schema changer were not blocked if a table had the schema_locked storage parameter set. #130670
  • Fixed a bug that could prevent a CHANGEFEED from being able to resume after being paused for a prolonged period of time. #130622
  • Fixed a bug where if a client connection was attempting a schema change while the same schema objects were being dropped, it was possible for the connection to be incorrectly dropped. #130928
  • Fixed a bug introduced in v23.1 that could cause incorrect results when:
    1. The query contained a correlated subquery.
    2. The correlated subquery had a GROUP BY or DISTINCT operator with an outer-column reference in its input.
    3. The correlated subquery was in the input of a SELECT or JOIN operator.
    4. The SELECT or JOIN had a filter that set the outer-column reference from (2) equal to a non-outer column in the input of the grouping operator.
    5. The grouping column set did not include the replacement column, and functionally determined the replacement column. #130925
  • Fixed a bug which could cause errors with the message "internal error: Non-nullable column ..." when executing statements under READ COMMITTED isolation that involved tables with NOT NULL virtual columns. #130725
  • Fixed a bug that could cause a very rare internal error "lists in SetPrivate are not all the same length" when executing queries. #130981
  • Fixed a bug that could cause incorrect evaluation of scalar expressions involving NULL values in rare cases. #128123
  • SHOW CREATE ALL SCHEMAS now shows corresponding schema comments in its output. #130164
  • Fixed a bug, introduced in v23.2.0, where creating a new incremental schedule (using ALTER BACKUP SCHEDULE) on a full backup schedule created on an older version would fail. #131231
  • Fixed a bug that could cause an internal error if a table with an implicit (rowid) primary key was locked from within a subquery like SELECT * FROM (SELECT * FROM foo WHERE x = 2) FOR UPDATE;. The error could occur either under READ COMMITTED isolation, or with the optimizer_use_lock_op_for_serializable session setting enabled. #129768
  • Fixed a bug where jobs created in a session with non-zero session timezone offsets could hang before starting, or report incorrect creation times when viewed in SHOW JOBS and the DB Console. #123632
  • Fixed a bug which could result in changefeeds using CDC queries failing due to a system table being garbage collected. #131027
  • ALTER COLUMN TYPE now errors out when there is a partial index that is dependent on the column being altered. #131590

Performance improvements

Build changes

  • Changed the AWS SDK version used for interactions with external storage from v1 to v2. #129938
