diff --git a/documentation/concept/array.md b/documentation/concept/array.md
index 0c5edb00..6ecb5393 100644
--- a/documentation/concept/array.md
+++ b/documentation/concept/array.md
@@ -103,21 +103,6 @@
 QuestDB always stores arrays in vanilla form. If you transform an array's shape
 and then store it to the database, QuestDB will physically rearrange the
 elements, and store the new array in vanilla shape.
 
-## Array and NULL/Nan/Infinity values
-
-QuestDB does not support `NULL` values in arrays. In a scalar `DOUBLE` column,
-if the value is `NaN`, it is treated as `NULL` in calculations. However, when it
-appears inside an array, it is treated as such, and not `NULL`. If it appears in
-an array and you take it out using the array access expression, the resulting
-scalar value will again be treated as `NULL`, however whole-array operations
-like `array_sum` will treat it according to its floating-point number semantics.
-
-QuestDB currently has an inconsistency in treating floating-point infinities.
-They are sometimes treated as `NULL`, and sometimes not. Infinity values
-currently produce unspecified behavior as scalar `DOUBLE` values, but inside an
-array, while performing whole-array operations, they are consistently treated as
-infinity.
-
 ## The ARRAY literal
 
 You can create an array from scalar values using the `ARRAY[...]` syntax, as
diff --git a/documentation/concept/replication.md b/documentation/concept/replication.md
index 92bbcebd..820f2034 100644
--- a/documentation/concept/replication.md
+++ b/documentation/concept/replication.md
@@ -87,8 +87,8 @@
 timeline
     Currently : AWS S3
               : Azure Blob Store
              : NFS Filesystem
+             : Google Cloud Storage
     Next-up : HDFS
-    Later on : Google Cloud Storage
 ```
 
 Something missing? Want to see it sooner? [Contact us](/enterprise/contact)!
@@ -109,6 +109,12 @@
 is:
 
 An example of a replication object store configuration using NFS is:
 
 `replication.object.store=fs::root=/mnt/nfs_replication/final;atomic_write_dir=/mnt/nfs_replication/scratch;`
 
+An example of a replication object store configuration using GCS is:
+
+`replication.object.store=gcs::bucket=<bucket name>;root=/;credential=<base64-encoded key>;`
+
+For `GCS`, you can also use `credential_path` to set a key-file location.
+
 See the [Replication setup guide](/docs/operations/replication) for direct
 examples.
diff --git a/documentation/operations/replication.md b/documentation/operations/replication.md
index 9f2d3289..4d1221fe 100644
--- a/documentation/operations/replication.md
+++ b/documentation/operations/replication.md
@@ -78,7 +78,7 @@
 cost-effective WAL file storage. For further information, see the
 section.
 
 After that,
-[create a Blob Container ](https://learn.microsoft.com/en-us/azure/storage/blobs/quickstart-storage-explorer)to
+[create a Blob Container](https://learn.microsoft.com/en-us/azure/storage/blobs/quickstart-storage-explorer) to
 be the root of your replicated data blobs. It will soon be referenced in the
 `BLOB_CONTAINER` variable.
 
@@ -133,8 +133,8 @@ For appropriate balance, be sure to:
 
 - Disable blob versioning
 
 Finally,
-[set up bucket lifecycle configuration policy ](https://docs.aws.amazon.com/AmazonS3/latest/userguide/how-to-set-lifecycle-configuration-intro.html)to
-clean up WAL files after a period of time. There are considerations to ensure
+[set up bucket lifecycle configuration policy](https://docs.aws.amazon.com/AmazonS3/latest/userguide/how-to-set-lifecycle-configuration-intro.html)
+to clean up WAL files after a period of time.
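+For illustration, a minimal lifecycle rule that expires objects after 30 days
+might look like the following sketch; the rule ID and the expiry period are
+assumptions to adapt to your own snapshot schedule:
+
+```
+{
+  "Rules": [
+    {
+      "ID": "expire-replicated-wal",
+      "Filter": { "Prefix": "" },
+      "Status": "Enabled",
+      "Expiration": { "Days": 30 }
+    }
+  ]
+}
+```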
+There are considerations to ensure
 that the storage of the WAL files remains cost-effective. For deeper background,
 see the
 [object storage expiration policy](/docs/operations/replication/#snapshot-schedule-and-object-store-expiration-policy)
@@ -201,6 +201,38 @@
 With your values, skip to the
 [Setup database replication](/docs/operations/replication/#setup-database-replication)
 section.
 
+### Google Cloud Storage (GCS)
+
+First, create a new Google Cloud Storage (GCS) bucket, typically with
+`Public access: Not public`.
+
+Then create a new service account, and give it read-write permissions for the
+bucket. The simplest role is `Storage Admin`, but you may set up more granular
+permissions as needed.
+
+Create a new private key for this user and download it in `JSON` format. Then
+encode this key as `Base64`.
+
+If you are on Linux, you can `cat` the file and pass it to `base64`:
+
+```
+cat <key-file>.json | base64
+```
+
+Then construct the connection string:
+
+```
+replication.object.store=gcs::bucket=<bucket name>;root=/;credential=<base64-encoded key>;
+```
+
+If you do not want to put the credentials directly in the connection string, you
+can swap the `credential` key for `credential_path`, and give it a path to the
+key-file.
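+For example, with the key file saved at `/var/lib/questdb/gcs-key.json` (an
+assumed path):
+
+```
+replication.object.store=gcs::bucket=<bucket name>;root=/;credential_path=/var/lib/questdb/gcs-key.json;
+```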
+
+With your values, continue to the
+[Setup database replication](/docs/operations/replication/#setup-database-replication)
+section.
+
 ## Setup database replication
 
 Set the following changes in their respective `server.conf` files:
@@ -340,36 +372,42 @@ In general, we can group them into a small matrix:
 
 | primary | restart primary | promote replica, create new replica |
 | replica | restart replica | destroy and recreate replica        |
 
-To successfully recover from serious failures, we strongly advise that operators:
+To successfully recover from serious failures, we strongly advise that operators:
 
-* Follow best practices
-* **Regularly back up data**
+- Follow best practices
+- **Regularly back up data**
 
 ### Network partitions
 
-Temporary network partitions introduce delays between when data is written to the primary, and when it becomes available
-for read in the replica. A temporary network partition is not necessarily a problem.
+Temporary network partitions introduce delays between when data is written to
+the primary, and when it becomes available for read in the replica. A temporary
+network partition is not necessarily a problem.
 
-For example, data can be ingested into the primary when the object-store is not available. In this case, the replicas will contain stale data,
-and then catch-up when the primary reconnects and successfully uploads to the object store.
+For example, data can be ingested into the primary when the object-store is not
+available. In this case, the replicas will contain stale data, and then catch up
+when the primary reconnects and successfully uploads to the object store.
 
-Permanent network partitions are not recoverable, and the [emergency primary migration](#emergency-primary-migration) flow should be followed.
+Permanent network partitions are not recoverable, and the
+[emergency primary migration](#emergency-primary-migration) flow should be
+followed.
 
 ### Instance crashes
 
-An instance crash may be recoverable or unrecoverable, depending on the specific cause of the crash.
-If the instance crashes during ingestion, then it is possible for transactions to be corrupted.
-This will lead to a table suspension on restart.
+An instance crash may be recoverable or unrecoverable, depending on the specific
+cause of the crash. If the instance crashes during ingestion, then it is
+possible for transactions to be corrupted. This will lead to a table suspension
+on restart.
 
 To recover in this case, you can skip the transaction, and reload any missing
 data.
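+For example, a sketch of this recovery (the table name and the transaction
+number are placeholders; `wal_tables()` reports suspended tables and their
+transaction numbers):
+
+```questdb-sql
+-- find suspended tables and their transaction numbers
+SELECT * FROM wal_tables();
+
+-- resume WAL application, skipping the corrupted transaction
+ALTER TABLE trades RESUME WAL FROM TRANSACTION 12345;
+```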
 
-In the event that the corruption is severe, or confidence in the underlying instance is removed, you should follow the
+In the event that the corruption is severe, or confidence in the underlying
+instance is removed, you should follow the
 [emergency primary migration](#emergency-primary-migration) flow.
 
 ### Disk or block storage failure
 
-Disk failures can present in several forms, which may be difficult to detect.
+Disk failures can present in several forms, which may be difficult to detect.
 
 Look for the following symptoms:
 
@@ -383,20 +421,24 @@ Look for the following symptoms:
 
 - This is usually caused by writes to disk partially or completely failing
 - This can also be caused by running out of disk space
 
-As with an instance crash, the consequences can be far-reaching and not immediately clear in all cases.
-
-To migrate to a new disk, follow the [emergency primary migration](#emergency-primary-migration) flow. When you create a new replica, you
-can populate it with the latest snapshot you have taken, and then recover the rest using replicated WALs in the object
-store.
+As with an instance crash, the consequences can be far-reaching and not
+immediately clear in all cases.
+
+To migrate to a new disk, follow the
+[emergency primary migration](#emergency-primary-migration) flow. When you
+create a new replica, you can populate it with the latest snapshot you have
+taken, and then recover the rest using replicated WALs in the object store.
 
 ### Flows
 
 #### Planned primary migration
 
-Use this flow when you want to change your primary to another instance, but the primary has not failed.
+Use this flow when you want to change your primary to another instance, but the
+primary has not failed.
 
-The database can be started in a mode which disallows further ingestion, but allows replication. With this method, you can ensure that all outstanding data has been replicated before you start ingesting into a new primary instance.
+The database can be started in a mode which disallows further ingestion, but
+allows replication. With this method, you can ensure that all outstanding data
+has been replicated before you start ingesting into a new primary instance.
 
 - Ensure primary instance is still capable of replicating data to the object store
 - Stop primary instance
@@ -404,7 +446,6 @@
 - Wait for the instance to complete its uploads and exit with `code 0`
 - Then follow the [emergency primary migration](#emergency-primary-migration) flow
 
-
 #### Emergency primary migration
 
 Use this flow when you wish to discard a failed primary instance and move to a new one.
 
 - Stop the replica instance
 - Set `replication.role=primary` on the replica
 - Ensure other primary-related settings are configured appropriately
   - for example, snapshotting policies
-- Create an empty `_migrate_primary` file in your database installation directory (i.e. the parent of `conf` and `db`)
+- Create an empty `_migrate_primary` file in your database installation
+  directory (i.e. the parent of `conf` and `db`), as shown after this list
 - Start the replica instance, which is now the new primary
 - Create a new replica instance to replace the promoted replica
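+For example, assuming the installation directory is `/var/lib/questdb` (adjust
+the path to your installation):
+
+```
+touch /var/lib/questdb/_migrate_primary
+```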
 
 :::warning
 
-Any data committed to the primary, but not yet replicated, will be lost. If the primary has not
-completely failed, you can follow the [planned primary migration](#planned-primary-migration) flow
-to ensure that all remaining data has been replicated before switching primary.
+Any data committed to the primary, but not yet replicated, will be lost. If the
+primary has not completely failed, you can follow the
+[planned primary migration](#planned-primary-migration) flow to ensure that all
+remaining data has been replicated before switching primary.
 
 :::
 
 #### When could migration fail?
 
-Two primaries started within the same `replication.primary.keepalive.interval=10s` may still break.
+Two primaries started within the same
+`replication.primary.keepalive.interval=10s` window may still break.
 
-It is important not to migrate the primary without stopping the first primary, if it is still within this interval.
+It is important not to migrate the primary without stopping the first primary,
+if it is still within this interval.
 
 This config can be set in the range of 1 to 300 seconds.
 
 #### Point-in-time recovery
 
-Create a QuestDB instance matching a specific historical point in time.
+Create a QuestDB instance matching a specific historical point in time.
 
-This is builds a new instance based on a recently recovered snapshot and WAL data in the object store.
+This builds a new instance based on a recent snapshot and the WAL data in the
+object store.
 
-It can also be used if you wish to remove the latest transactions from the database, or if you encounter corrupted
-transactions (though replicating a corrupt transaction has never been observed).
+It can also be used if you wish to remove the latest transactions from the
+database, or if you encounter corrupted transactions (though replicating a
+corrupt transaction has never been observed).
 
 **Flow**
 
-- (Recommended) Locate a recent primary instance snapshot that predates your intended recovery timestamp.
-  - A snapshot taken from **after** your intended recovery timestamp will not work.
-- Create the new primary instance, ideally from a snapshot, and ensure it is not running.
+- (Recommended) Locate a recent primary instance snapshot that predates your
+  intended recovery timestamp.
+  - A snapshot taken from **after** your intended recovery timestamp will not work.
+- Create the new primary instance, ideally from a snapshot, and ensure it is not
+  running.
 - Touch a `_recover_point_in_time` file.
-- Inside this file, add a `replication.object.store` setting pointing to the object store you wish to load transactions from.
-- Also add a `replication.recovery.timestamp` setting with the UTC time to which you would like to recover.
-  - The format is `YYYY-MM-DDThh:mm:ss.mmmZ`.
-- (Optional) Configure replication settings in `server.conf` pointing at a **new** object store location.
-  - You can either configure this instance as a standalone (non-replicated) instance, or
-  - Configure it as a new primary by setting `replication.role=primary`. In this case, the `replication.object.store` **must** point to a fresh, empty location.
-- If you have created the new primary using a snapshot, touch a `_restore` file to trigger the snapshot recovery process.
-  - More details can be found in the [backup and restore](/documentation/operations/backup.md) documentation.
+- Inside this file, add a `replication.object.store` setting pointing to the
+  object store you wish to load transactions from (see the example file after
+  this list).
+- Also add a `replication.recovery.timestamp` setting with the UTC time to which
+  you would like to recover.
+  - The format is `YYYY-MM-DDThh:mm:ss.mmmZ`.
+- (Optional) Configure replication settings in `server.conf` pointing at a
+  **new** object store location.
+  - You can either configure this instance as a standalone (non-replicated)
+    instance, or
+  - Configure it as a new primary by setting `replication.role=primary`. In this
+    case, the `replication.object.store` **must** point to a fresh, empty
+    location.
+- If you have created the new primary using a snapshot, touch a `_restore` file
+  to trigger the snapshot recovery process.
+  - More details can be found in the
+    [backup and restore](/documentation/operations/backup.md) documentation.
 - Start new primary instance.
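+For example, a sketch of a `_recover_point_in_time` file; the object store
+location and the timestamp are placeholders:
+
+```
+replication.object.store=s3::bucket=<bucket name>;root=/;region=<region>;
+replication.recovery.timestamp=2025-01-15T12:30:00.000Z
+```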
 
 ## Multi-primary ingestion
 
@@ -464,5 +521,5 @@
 [QuestDB Enterprise](/enterprise/) supports multi-primary ingestion, where
 multiple primaries can write to the same database.
 
-See the [Multi-primary ingestion](/docs/operations/multi-primary-ingestion/) page for
-more information.
+See the [Multi-primary ingestion](/docs/operations/multi-primary-ingestion/)
+page for more information.
diff --git a/documentation/reference/function/array.md b/documentation/reference/function/array.md
index 04812293..29457804 100644
--- a/documentation/reference/function/array.md
+++ b/documentation/reference/function/array.md
@@ -11,7 +11,8 @@
 not they can take an array parameter.
 
 ## array_avg
 
-`array_avg(array)` returns the average of all the array elements.
+`array_avg(array)` returns the average of all the array elements. `NULL` elements
+don't contribute to either the count or the sum.
 
 #### Parameter
 
@@ -29,8 +30,8 @@ SELECT array_avg(ARRAY[ [1.0, 1.0], [2.0, 2.0] ]);
 
 ## array_count
 
-`array_count(array)` returns the number of finite elements in the array. The
-`NaN` and infinity values are not included in the count.
+`array_count(array)` returns the number of finite elements in the array. `NULL`
+elements do not contribute to the count.
 
 #### Parameter
 
@@ -52,7 +53,8 @@
 `array_cum_sum(array)` returns a 1D array of the cumulative sums over the array,
 traversing it in row-major order. The input array can have any dimensionality.
-The returned 1D array has the same number of elements as the input array.
+The returned 1D array has the same number of elements as the input array. `NULL`
+elements behave as if they were zero.
 
 #### Parameter
 
@@ -71,7 +73,8 @@ SELECT array_cum_sum(ARRAY[ [1.0, 1.0], [2.0, 2.0] ]);
 
 ## array_position
 
 `array_position(array, elem)` returns the position of `elem` inside the 1D `array`. If
-`elem` doesn't appear in `array`, it returns `NULL`.
+`elem` doesn't appear in `array`, it returns `NULL`. If `elem` is `NULL`, it returns the
+position of the first `NULL` element, if any.
 
 #### Parameters
 
@@ -92,7 +95,8 @@
 
 ## array_sum
 
-`array_sum(array)` returns the sum of all the array elements.
+`array_sum(array)` returns the sum of all the array elements. `NULL` elements
+behave as if they were zero.
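+
+For example, assuming the `ARRAY` literal accepts `NULL` elements (a sketch,
+not captured from a live database), the `NULL` counts as zero:
+
+```questdb-sql
+SELECT array_sum(ARRAY[ [1.0, null], [2.0, 2.0] ]);
+```
+
+| array_sum |
+| --------- |
+| 5.0       |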
 
 #### Parameter
 
@@ -182,8 +186,9 @@ array can be sorted ascending or descending, and the function auto-detects
 this.
 
 :::warning
 
-The array must be sorted, but this function doesn't enforce it. It runs a binary
-search for the value, and the behavior with an unsorted array is unspecified.
+The array must be sorted, and must not contain `NULL`s, but this function
+doesn't enforce it. It runs a binary search for the value, and the behavior with
+an unsorted array is unspecified.
 
 :::
 
@@ -221,9 +226,14 @@ and the second one "column".
 
 The resulting matrix has the same number of rows as `left_matrix` and the same
 number of columns as `right_matrix`. The value at every (row, column) position
 in the result is equal to the sum of products of matching elements in the
-corresponding row of `left_matrix` and column of `right_matrix`:
+corresponding row of `left_matrix` and column of `right_matrix`. In a formula,
+with C = A x B:
 
-`result[row, col] := sum_over_i(left_matrix[row, i] * right_matrix[i, col])`
+$$
+
+C_{jk} = \sum_{i=1}^{n} A_{ji} B_{ik}
+
+$$
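+
+For instance, with the matrices from the example below, $A = [[1, 2], [3, 4]]$
+and $B = [[2, 3], [2, 3]]$, the top-left element of the result works out as:
+
+$$
+C_{11} = A_{11} B_{11} + A_{12} B_{21} = 1 \cdot 2 + 2 \cdot 2 = 6
+$$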
 
 #### Parameters
 
@@ -268,8 +278,8 @@ SELECT matmul(ARRAY[[1, 2], [3, 4]], ARRAY[[2, 3], [2, 3]]);
 
 (deepest) dimension by `distance`. The distance can be positive (right shift) or
 negative (left shift). More formally, it moves elements from position `i` to `i
 + distance`, dropping elements whose resulting position is outside the array.
-It fills the holes created by shifting with `fill_value`, whose default for a
-`DOUBLE` array is `NaN`.
+It fills the holes created by shifting with `fill_value`, the default being
+`NULL`.
 
 #### Parameters
 
@@ -285,7 +295,7 @@ SELECT shift(ARRAY[ [1.0, 2.0], [3.0, 4.0] ], 1);
 
 | shift |
 | -------------------------- |
-| ARRAY[[NaN,1.0],[NaN,3.0]] |
+| ARRAY[[null,1.0],[null,3.0]] |
 
 ```questdb-sql
 SELECT shift(ARRAY[ [1.0, 2.0], [3.0, 4.0] ], -1);
@@ -293,7 +303,7 @@ SELECT shift(ARRAY[ [1.0, 2.0], [3.0, 4.0] ], -1);
 
 | shift |
 | -------------------------- |
-| ARRAY[[2.0,NaN],[4.0,NaN]] |
+| ARRAY[[2.0,null],[4.0,null]] |
 
 ```questdb-sql
 SELECT shift(ARRAY[ [1.0, 2.0], [3.0, 4.0] ], -1, 10.0);
 ```
diff --git a/documentation/reference/function/row-generator.md b/documentation/reference/function/row-generator.md
index fdf43edd..b8c21a3a 100644
--- a/documentation/reference/function/row-generator.md
+++ b/documentation/reference/function/row-generator.md
@@ -4,55 +4,132 @@ sidebar_label: Row generator
 description: Row generator function reference documentation.
 ---
 
-The `long_sequence()` function may be used as a row generator to create table
-data for testing. Basic usage of this function involves providing the number of
-iterations required. Deterministic pseudo-random behavior can be achieved by
-providing seed values when calling the function.
+## generate_series
 
-This function is commonly used in combination with
-[random generator functions](/docs/reference/function/random-value-generator/)
-to produce mock data.
+Use `generate_series` to generate a pseudo-table with an arithmetic series in a
+single column. You can call it in isolation (`generate_series(...)`), or as part of
+a SELECT statement (`SELECT * FROM generate_series(...)`).
 
-## long_sequence
+This function can generate a `LONG` or `DOUBLE` series. There is also a
+[variant](/docs/reference/function/timestamp-generator#generate_series)
+that generates a `TIMESTAMP` series.
+
+The `start` and `end` values are interchangeable, and you can use a negative
+`step` value to obtain a descending arithmetic series.
+
+The series is inclusive on both ends.
 
-- `long_sequence(iterations)` - generates rows
-- `long_sequence(iterations, seed1, seed2)` - generates rows deterministically
+The `step` argument is optional, and defaults to 1.
 
 **Arguments:**
 
--`iterations`: is a `long` representing the number of rows to generate.
-`seed1`
-and `seed2` are `long64` representing both parts of a `long128` seed.
+`generate_series(start_long, end_long, step_long)` - generates a series of
+longs.
 
-### Row generation
+`generate_series(start_double, end_double, step_double)` - generates a series of
+doubles.
 
-The `long_sequence()` function can be used to generate very large datasets for
-testing e.g. billions of rows.
+**Return value:**
 
-`long_sequence(iterations)` is used to:
+The column type of the pseudo-table is either `LONG` or `DOUBLE`, according to
+the type of the arguments.
 
-- Generate a number of rows defined by `iterations`.
-- Generate a column `x:long` of monotonically increasing long integers starting
-  from 1, which can be accessed for queries.
+**Examples:**
 
-### Random number seed
+```questdb-sql title="Ascending LONG series" demo
+generate_series(-3, 3, 1);
+-- or
+generate_series(-3, 3);
+```
 
-When `long_sequence` is used conjointly with
-[random generators](/docs/reference/function/random-value-generator/), these
-values are usually generated at random. The function supports a seed to be
-passed in order to produce deterministic results.
+| generate_series |
+| --------------- |
+| -3 |
+| -2 |
+| -1 |
+| 0 |
+| 1 |
+| 2 |
+| 3 |
+
+```questdb-sql title="Descending LONG series" demo
+generate_series(3, -3, -1);
+```
 
-:::note
+| generate_series |
+| --------------- |
+| 3 |
+| 2 |
+| 1 |
+| 0 |
+| -1 |
+| -2 |
+| -3 |
+
+```questdb-sql title="Ascending DOUBLE series" demo
+generate_series(-3d, 3d, 1d);
+-- or
+generate_series(-3d, 3d);
+```
 
-Deterministic procedural generation makes it easy to test on vasts amounts of
-data without actually moving large files around across machines. Using the same
-seed on any machine at any time will consistently produce the same results for
-all random functions.
+| generate_series |
+| --------------- |
+| -3.0 |
+| -2.0 |
+| -1.0 |
+| 0.0 |
+| 1.0 |
+| 2.0 |
+| 3.0 |
+
+```questdb-sql title="Descending DOUBLE series" demo
+generate_series(-3d, 3d, -1d);
+```
+
+| generate_series |
+| --------------- |
+| 3.0 |
+| 2.0 |
+| 1.0 |
+| 0.0 |
+| -1.0 |
+| -2.0 |
+| -3.0 |
+
+## long_sequence
+
+Use `long_sequence()` as a row generator to create table data for testing. The
+function deals with two concerns:
+
+- generates a pseudo-table with an ascending series of LONG numbers starting at
+  1
+- serves as the provider of pseudo-randomness to all the
+  [random value functions](/docs/reference/function/random-value-generator/)
+
+Basic usage of this function involves providing the number of rows to generate.
+You can achieve deterministic pseudo-random behavior by providing the random
+seed values.
+
+- `long_sequence(num_rows)` — generates rows with a random seed
+- `long_sequence(num_rows, seed1, seed2)` — generates rows deterministically
+
+:::tip
+
+Deterministic procedural generation makes it easy to test on vast amounts of
+data without moving large files across machines. Using the same seed on any
+machine at any time will consistently produce the same results for all random
+functions.
 
 :::
 
+**Arguments:**
+
+- `num_rows` — `long` representing the number of rows to generate
+- `seed1` and `seed2` — `long` numbers that combine into a `long128` seed
+
 **Examples:**
 
-```questdb-sql title="Generating multiple rows"
+```questdb-sql title="Generate multiple rows"
 SELECT x, rnd_double()
 FROM long_sequence(5);
 ```
@@ -65,7 +142,7 @@ FROM long_sequence(5);
 | 4 | 0.9130602021 |
 | 5 | 0.718276777 |
 
-```questdb-sql title="Accessing row_number using the x column"
+```questdb-sql title="Access row_number using the x column"
 SELECT x, x*x
 FROM long_sequence(5);
 ```
@@ -78,7 +155,7 @@ FROM long_sequence(5);
 | 4 | 16 |
 | 5 | 25 |
 
-```questdb-sql title="Using with a seed"
+```questdb-sql title="Use with a fixed random seed"
 SELECT rnd_double()
 FROM long_sequence(2,128349234,4327897);
 ```
@@ -86,7 +163,7 @@
 :::note
 
 The results below will be the same on any machine at any time as long as they
-use the same seed in long_sequence.
+use the same seed in `long_sequence`.
 
 :::
diff --git a/documentation/reference/function/timestamp-generator.md b/documentation/reference/function/timestamp-generator.md
index abb1dd6c..22d3ccce 100644
--- a/documentation/reference/function/timestamp-generator.md
+++ b/documentation/reference/function/timestamp-generator.md
@@ -4,34 +4,33 @@ sidebar_label: Timestamp generator
 description: Timestamp generator function reference documentation.
 ---
 
-The `timestamp_sequence()` function may be used as a timestamp generator to
-create data for testing. Pseudo-random steps can be achieved by providing a
-[random function](/docs/reference/function/random-value-generator/) to the
-`step` argument. A `seed` value may be provided to a random function if the
-randomly-generated `step` should be deterministic.
-
 ## timestamp_sequence
 
+This function acts similarly to
+[`rnd_*`](/docs/reference/function/random-value-generator/) functions. It
+generates a single timestamp value (not a pseudo-table), but when used in
+combination with the `long_sequence()` pseudo-table function, its output forms a
+series of timestamps that monotonically increase.
+
 - `timestamp_sequence(startTimestamp, step)` generates a sequence of `timestamp`
   starting at `startTimestamp`, and incrementing by a `step` set as a `long`
-  value in microseconds. This `step` can be either;
+  value in microseconds. The `step` can be either:
 
-  - a static value, in which case the growth will be monotonic, or
-
-  - a randomized value, in which case the growth will be randomized. This is
-    done using
-    [random value generator functions](/docs/reference/function/random-value-generator/).
+  - a fixed value, resulting in a steadily-growing timestamp series
+  - a random function invocation, such as
+    [rnd_short()](/docs/reference/function/random-value-generator#rnd_short),
+    resulting in a timestamp series that grows in random steps
 
 **Arguments:**
 
-- `startTimestamp`: is a `timestamp` representing the starting (i.e lowest)
-  generated timestamp in the sequence.
-- `step`: is a `long` representing the interval between 2 consecutive generated
-  timestamps in `microseconds`.
+- `startTimestamp` — the starting (i.e. lowest) generated timestamp in the
+  sequence
+- `step` — the interval (in microseconds) between 2 consecutive generated
+  timestamps
 
 **Return value:**
 
-Return value type is `timestamp`.
+The type of the return value is `TIMESTAMP`.
 
 **Examples:**
 
@@ -64,3 +63,97 @@ FROM long_sequence(5);
 
 | x | timestamp |
 | - | --------------------------- |
 | 1 | 2019-10-17T00:00:00.000000Z |
 | 2 | 2019-10-17T00:00:00.300000Z |
 | 3 | 2019-10-17T00:00:00.600000Z |
 | 4 | 2019-10-17T00:00:00.900000Z |
 | 5 | 2019-10-17T00:00:01.300000Z |
+
+## generate_series
+
+This function generates a pseudo-table containing an arithmetic series of
+timestamps. Use it when you don't need a given number of rows, but a given time
+period defined by start, end, and step.
+
+You can call it in isolation (`generate_series(...)`), or as part of a SELECT
+statement (`SELECT * FROM generate_series(...)`).
+
+Provide the time step either in microseconds, or in a period string, similar to
+`SAMPLE BY`.
+
+The `start` and `end` values are interchangeable; use a negative time step value
+to obtain the series in reverse order.
+
+The series is inclusive on both ends.
+
+**Arguments:**
+
+There are two timestamp-generating variants of `generate_series`:
+
+- `generate_series(start, end, step_period)` - generates a series of timestamps
+  between `start` and `end`, in periodic steps
+- `generate_series(start, end, step_micros)` - generates a series of timestamps
+  between `start` and `end`, in microsecond steps
+
+**Return value:**
+
+The column type of the pseudo-table is `TIMESTAMP`.
+
+**Examples:**
+
+```questdb-sql title="Ascending series using a period" demo
+generate_series('2025-01-01', '2025-02-01', '5d');
+```
+
+| generate_series |
+| --------------------------- |
+| 2025-01-01T00:00:00.000000Z |
+| 2025-01-06T00:00:00.000000Z |
+| 2025-01-11T00:00:00.000000Z |
+| 2025-01-16T00:00:00.000000Z |
+| 2025-01-21T00:00:00.000000Z |
+| 2025-01-26T00:00:00.000000Z |
+| 2025-01-31T00:00:00.000000Z |
+
+```questdb-sql title="Descending series using a period" demo
+generate_series('2025-01-01', '2025-02-01', '-5d');
+```
+
+| generate_series |
+| --------------------------- |
+| 2025-02-01T00:00:00.000000Z |
+| 2025-01-27T00:00:00.000000Z |
+| 2025-01-22T00:00:00.000000Z |
+| 2025-01-17T00:00:00.000000Z |
+| 2025-01-12T00:00:00.000000Z |
+| 2025-01-07T00:00:00.000000Z |
+| 2025-01-02T00:00:00.000000Z |
+
+```questdb-sql title="Ascending series using microseconds" demo
+generate_series(
+  '2025-01-01T00:00:00Z'::timestamp,
+  '2025-01-01T00:05:00Z'::timestamp,
+  60_000_000
+);
+```
+
+| generate_series |
+| --------------------------- |
+| 2025-01-01T00:00:00.000000Z |
+| 2025-01-01T00:01:00.000000Z |
+| 2025-01-01T00:02:00.000000Z |
+| 2025-01-01T00:03:00.000000Z |
+| 2025-01-01T00:04:00.000000Z |
+| 2025-01-01T00:05:00.000000Z |
+
+```questdb-sql title="Descending series using microseconds" demo
+generate_series(
+  '2025-01-01T00:00:00Z'::timestamp,
+  '2025-01-01T00:05:00Z'::timestamp,
+  -60_000_000
+);
+```
+
+| generate_series |
+| --------------------------- |
+| 2025-01-01T00:05:00.000000Z |
+| 2025-01-01T00:04:00.000000Z |
+| 2025-01-01T00:03:00.000000Z |
+| 2025-01-01T00:02:00.000000Z |
+| 2025-01-01T00:01:00.000000Z |
+| 2025-01-01T00:00:00.000000Z |
diff --git a/documentation/reference/sql/datatypes.md b/documentation/reference/sql/datatypes.md
index 2953efbe..204d7fe6 100644
--- a/documentation/reference/sql/datatypes.md
+++ b/documentation/reference/sql/datatypes.md
@@ -73,8 +73,8 @@ Many nullable types reserve a value that marks them `NULL`:
 
 | Type Name | Null value | Description |
 | ---------------- | -------------------------------------------------------------------- | ---------------------------------------------------------------------------------------- |
-| `float` | `NaN` | As defined by IEEE 754 (`java.lang.Float.NaN`). |
-| `double` | `NaN` | As defined by IEEE 754 (`java.lang.Double.NaN`). |
+| `float` | `NaN`, `+Infinity`, `-Infinity` | As defined by IEEE 754 (`java.lang.Float.NaN`, etc.) |
+| `double` | `NaN`, `+Infinity`, `-Infinity` | As defined by IEEE 754 (`java.lang.Double.NaN`, etc.) |
 | `long256` | `0x8000000000000000800000000000000080000000000000008000000000000000` | The value equals four consecutive `long` null literals. |
 | `long` | `0x8000000000000000L` | Minimum possible value a `long` can take, -2^63. |
 | `date` | `0x8000000000000000L` | Minimum possible value a `long` can take, -2^63. |