9 changes: 0 additions & 9 deletions dashboard-elements/create.mdx
@@ -156,15 +156,6 @@ _Array Fields_
- `exists`
- `not-exists`

#### Special fields

Axiom creates the following two fields automatically for a new dataset:

- `_time` is the timestamp of the event. If the data you ingest doesn’t have a `_time` field, Axiom assigns the time of the data ingest to the events.
- `_sysTime` is the time when you ingested the data.

In most cases, you can use `_time` and `_sysTime` interchangeably. The difference between them can be useful if you experience clock skews on your event-producing systems.

### Group by (segmentation)

When visualizing data, it can be useful to segment data into specific groups to more clearly understand how the data behaves.
9 changes: 0 additions & 9 deletions query-data/explore.mdx
@@ -118,15 +118,6 @@ To select the time range, choose one of the following options:
- Use the **Quick range** items to quickly select popular time ranges.
- Use the **Custom start/end date** fields to select specific times.

### Special fields

Axiom creates the following two fields automatically for a new dataset:

- `_time` is the timestamp of the event. If the data you ingest doesn’t have a `_time` field, Axiom assigns the time of the data ingest to the events.
- `_sysTime` is the time when you ingested the data.

In most cases, you can use `_time` and `_sysTime` interchangeably. The difference between them can be useful if you experience clock skews on your event-producing systems.

## Create query using APL

APL is a data processing language that supports filtering, extending, and summarizing data. For more information, see [Introduction to APL](/apl/introduction).
9 changes: 2 additions & 7 deletions reference/datasets.mdx
@@ -62,14 +62,9 @@ Don’t create multiple Axiom organizations to separate your data. For example,

<AccessToDatasets />

## Special fields
## Limits on ingested data

Axiom creates the following two fields automatically for a new dataset:

- `_time` is the timestamp of the event. If the data you ingest doesn’t have a `_time` field, Axiom assigns the time of the data ingest to the events.
- `_sysTime` is the time when you ingested the data.

In most cases, you can use `_time` and `_sysTime` interchangeably. The difference between them can be useful if you experience clock skews on your event-producing systems.
For more information on limits and requirements imposed by Axiom, see [Limits](/reference/field-restrictions).

## Create dataset

48 changes: 44 additions & 4 deletions reference/field-restrictions.mdx
@@ -5,8 +5,6 @@ sidebarTitle: Limits
keywords: ['axiom documentation', 'documentation', 'axiom', 'reference', 'settings', 'field restrictions', 'time stamp', 'time stamp field', 'limits', 'requirements', 'pricing', 'usage']
---

import IngestDataLimits from "/snippets/ingest-data-limits.mdx"

Axiom applies certain limits and requirements to guarantee good service across the platform. Some of these limits depend on your pricing plan, and some of them are applied system-wide. This reference article explains all limits and requirements applied by Axiom.

Limits are necessary to prevent potential issues that could arise from the ingestion of excessively large events or data structures that are too complex. Limits help maintain system performance, allow for effective data processing, and manage resources effectively.
@@ -45,15 +43,57 @@ For more information on how to save on data loading, data retention, and queryin

Axiom restricts the number of datasets and the number of fields in your datasets. The number of datasets and fields you can use is based on your pricing plan and explained in the table above.

If you ingest a new event that would exceed the allowed number of fields in a dataset, Axiom returns an error and rejects the event. To prevent this error, ensure that the number of fields in your events are within the allowed limits. To reduce the number of fields in a dataset, [trim the dataset](/reference/datasets#trim-dataset) and [vacuum its fields](/reference/datasets#vacuum-fields).
If you ingest a new event that would exceed the allowed number of fields in a dataset, Axiom returns an error and rejects the event. To prevent this error, ensure that the number of fields in your events is within the allowed limits.

To reduce the number of fields in a dataset, use one of the following approaches:
- [Trim the dataset](/reference/datasets#trim-dataset) and [vacuum its fields](/reference/datasets#vacuum-fields).
- Use [map fields](/apl/data-types/map-fields).
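
As a rough sketch of the map-fields approach, the Python snippet below nests high-churn attributes under a single parent field before ingest. The field names and values are hypothetical, and whether the nested keys count toward the field limit depends on the map-field configuration described in the linked page.

```python
# Sketch only: group volatile attributes under one parent field so each event
# contributes a single top-level field ("attributes") instead of one field per key.
flat_event = {
    "service": "checkout",          # hypothetical names and values
    "attributes.user_id": "u-123",
    "attributes.cart_size": 3,
    "attributes.experiment": "b",
}

def nest_attributes(event: dict, prefix: str = "attributes.") -> dict:
    """Move every key starting with `prefix` under one nested field."""
    nested, attrs = {}, {}
    for key, value in event.items():
        if key.startswith(prefix):
            attrs[key[len(prefix):]] = value
        else:
            nested[key] = value
    if attrs:
        nested[prefix.rstrip(".")] = attrs
    return nested

print(nest_attributes(flat_event))
# {'service': 'checkout', 'attributes': {'user_id': 'u-123', 'cart_size': 3, 'experiment': 'b'}}
```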

## System-wide limits

The following limits are applied to all accounts, irrespective of the pricing plan.

### Limits on ingested data

<IngestDataLimits />
The table below summarizes the limits Axiom applies to each data ingest. These limits are independent of your pricing plan.

| Limit | Value |
| ---------------------------- | --------- |
| Maximum field size | 1 MB |
| Maximum events in a batch | 10,000 |
| Maximum field name length | 200 bytes |

**Review comment** (on the batch limit row): I don't think we have a limit on the number of events in a batch?

Maybe also list maximum field size of 1 MB? It feels a bit tautological (because the event itself cannot be bigger), but it makes sense to mention in the case where we replace strings longer than this with `<invalid string: too long>`.

If you try to ingest data that exceeds these limits, Axiom does the following:
- Replaces strings that are too long with `<invalid string: too long>`.
- Replaces binary with `<invalid data>`.
- Truncates maps and slices that nest deeper than 100 levels and replaces them with `nil` at the cut-off level.
- Converts the following float values to `nil`:
  - NaN
  - +Infinity
  - -Infinity
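
As a minimal, hedged illustration of these rules, the Python sketch below checks a batch client-side against the values in the table above before sending. The limit constants are copied from that table (the batch limit is the one questioned in the review comment above), and the event contents are made up.

```python
import math

# Values copied from the table above; treat them as documentation, not API constants.
MAX_FIELD_SIZE_BYTES = 1 * 1024 * 1024  # 1 MB per field
MAX_EVENTS_PER_BATCH = 10_000           # per batch (see review comment above)
MAX_FIELD_NAME_BYTES = 200              # per field name

def check_event(event: dict) -> list[str]:
    """Return the problems that would trigger the replacements listed above."""
    problems = []
    for name, value in event.items():
        if len(name.encode("utf-8")) > MAX_FIELD_NAME_BYTES:
            problems.append(f"field name too long: {name[:20]}...")
        if isinstance(value, str) and len(value.encode("utf-8")) > MAX_FIELD_SIZE_BYTES:
            problems.append(f"string too long in {name} (becomes <invalid string: too long>)")
        if isinstance(value, float) and (math.isnan(value) or math.isinf(value)):
            problems.append(f"non-finite float in {name} (becomes nil)")
    return problems

batch = [{"level": "info", "message": "checkout completed", "duration_ms": 42.0}]
assert len(batch) <= MAX_EVENTS_PER_BATCH
print([check_event(event) for event in batch])  # [[]] — nothing would be replaced
```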

### Special fields

Axiom creates the following two fields automatically for a new dataset:

- `_time` is the timestamp of the event. If the data you ingest doesn’t have a `_time` field, Axiom assigns the time of the data ingest to the events. If you ingest data using the [Ingest data](/restapi/endpoints/ingestIntoDataset) API endpoint, you can specify the timestamp field with the [timestamp-field](/restapi/endpoints/ingestIntoDataset#parameter-timestamp-field) parameter.
- `_sysTime` is the time when you ingested the data.

In most cases, use `_time` to define the timestamp of events. In rare cases, if you experience clock skews on your event-producing systems, `_sysTime` can be useful.
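
As a sketch of the `timestamp-field` parameter mentioned above, the Python snippet below sends one event whose own `occurred_at` value becomes `_time`. The dataset name, token variable, and domain are placeholders, and the endpoint path is assumed to match the Ingest data API reference linked above.

```python
import os
import requests

AXIOM_DOMAIN = os.environ.get("AXIOM_DOMAIN", "api.axiom.co")  # placeholder domain
DATASET = "my-dataset"                                          # hypothetical dataset
TOKEN = os.environ["AXIOM_TOKEN"]                               # hypothetical token variable

# The event carries its own timestamp in a custom field; the `timestamp-field`
# query parameter asks Axiom to use that field as `_time` instead of ingest time.
event = {"occurred_at": "2024-01-01T12:00:00Z", "level": "info", "message": "user signed in"}

response = requests.post(
    f"https://{AXIOM_DOMAIN}/v1/datasets/{DATASET}/ingest",
    params={"timestamp-field": "occurred_at"},
    headers={"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"},
    json=[event],
)
response.raise_for_status()
```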

### Reserved field names

Axiom reserves the following field names for internal use:

- `_blockInfo`
- `_cursor`
- `_rowID`
- `_source`
- `_sysTime`

Don’t ingest data that contains these field names. If you try to ingest a field with a reserved name, Axiom renames the ingested field to `_user_FIELDNAME`. For example, if you try to ingest the field `_sysTime`, Axiom renames it to `_user_sysTime`.

In general, avoid ingesting field names that start with `_`.
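
To keep control of the final field names, you can rename reserved names yourself before sending. The sketch below is a client-side convention, not Axiom behavior; the `original_` prefix is an arbitrary choice.

```python
RESERVED_FIELD_NAMES = {"_blockInfo", "_cursor", "_rowID", "_source", "_sysTime"}

def rename_reserved(event: dict, prefix: str = "original_") -> dict:
    """Rename the reserved names listed above so Axiom does not rewrite them."""
    return {
        (prefix + name.lstrip("_")) if name in RESERVED_FIELD_NAMES else name: value
        for name, value in event.items()
    }

print(rename_reserved({"_sysTime": "2024-01-01T00:00:00Z", "message": "hello"}))
# {'original_sysTime': '2024-01-01T00:00:00Z', 'message': 'hello'}
```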

### Requirements for timestamp field

4 changes: 1 addition & 3 deletions restapi/api-limits.mdx
@@ -4,8 +4,6 @@ description: 'Learn how to limit the number of calls a user can make over a cert
keywords: ['axiom documentation', 'documentation', 'axiom', 'axiom api', 'rest api', 'rest', 'rate limits', 'user', 'organization', 'query limits', 'ingest limits', 'requests', 'reset', 'scope', 'message']
---

import IngestDataLimits from "/snippets/ingest-data-limits.mdx"

Axiom limits the number of calls a user (and their organization) can make over a certain period
of time to ensure fair usage and to maintain the quality of service for everyone.
Axiom systems closely monitor API usage and if a user exceeds any thresholds, Axiom
@@ -62,4 +60,4 @@ which can efficiently manage the number of requests by aggregating data before s

## Limits on ingested data

<IngestDataLimits />
For more information on limits and requirements imposed by Axiom, see [Limits](/reference/field-restrictions).
7 changes: 2 additions & 5 deletions send-data/methods.mdx
@@ -7,7 +7,6 @@ keywords: ["send data", "ingest", "methods", "integrations", "opentelemetry", "v

import ReplaceDomain from "/snippets/replace-domain.mdx"
import ReplaceDatasetToken from "/snippets/replace-dataset-token.mdx"
import IngestDataLimits from "/snippets/ingest-data-limits.mdx"

The easiest way to send your first event data to Axiom is with a direct HTTP request using a tool like `cURL`.

@@ -106,8 +105,6 @@ The following examples show how to send data using OpenTelemetry from various la
If you need an ingestion method that isn’t in the list above, [contact Axiom](https://www.axiom.co/contact).
</Info>

### Limits on ingested data
## Limits on ingested data

<IngestDataLimits />

For more information about limits and requirements, see [Limits](/reference/field-restrictions).
For more information on limits and requirements imposed by Axiom, see [Limits](/reference/field-restrictions).
7 changes: 0 additions & 7 deletions snippets/ingest-data-limits.mdx

This file was deleted.