+This Python package is automatically generated by the [OpenAPI Generator](https://openapi-generator.tech) project:
-This repository contains source and test code for **Vectorize** clients in different languages.
+- API version: 0.0.1
+- Package version: 1.0.0
+- Generator version: 7.14.0
+- Build package: org.openapitools.codegen.languages.PythonClientCodegen
+For more information, please visit [https://vectorize.io](https://vectorize.io)
-The clients are generated automatically using OpenAPI generator, starting from the OpenAPI specification in the `vectorize_api.json` file that is downloaded from the [Vectorize Platform OpenAPI endpoint](https://platform.vectorize.io/api/openapi).
+## Requirements
+Python 3.9+
-## How to
-- Python
- - [Getting started](./src/python/README.md)
- - [Official documentation](https://docs.vectorize.io/api/api-getting-started)
- - [Code Reference](https://vectorize-io.github.io/vectorize-clients/python/vectorize_client/api.html)
-- TypeScript
- - [Getting started](./src/ts/README.md)
- - [Official documentation](https://docs.vectorize.io/api/api-getting-started)
- - [Code Reference](https://vectorize-io.github.io/vectorize-clients/ts/)
+## Installation & Usage
+### pip install
+If the Python package is hosted on a Git repository, you can install it directly with:
+```sh
+pip install git+https://github.com/GIT_USER_ID/GIT_REPO_ID.git
+```
+(you may need to run `pip` with root permission: `sudo pip install git+https://github.com/GIT_USER_ID/GIT_REPO_ID.git`)
+
+Then import the package:
+```python
+import vectorize_client
+```
+
+### Setuptools
-## Generate and release clients
-To generate a client, run the following command:
+Install via [Setuptools](https://pypi.org/project/setuptools/).
-```bash
-npm install
+```sh
+python setup.py install --user
+```
+(or `sudo python setup.py install` to install the package for all users)
-npm run generate:ts
-npm run generate:python
+Then import the package:
+```python
+import vectorize_client
```
-To release a client, run the following command:
+### Tests
+
+Execute `pytest` to run the tests.
+
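+For a quick check against a live organization, an integration-style test could look like the sketch below. This is only a suggested pattern: it assumes the package is installed and that `BEARER_TOKEN` and `VECTORIZE_ORG_ID` environment variables are set (both names are conventions chosen here, not required by the client).
+
+```python
+import os
+
+import pytest
+import vectorize_client
+
+
+@pytest.mark.skipif(
+    not (os.environ.get("BEARER_TOKEN") and os.environ.get("VECTORIZE_ORG_ID")),
+    reason="requires BEARER_TOKEN and VECTORIZE_ORG_ID environment variables",
+)
+def test_list_ai_platform_connectors():
+    # Authenticate with the bearer token from the environment.
+    configuration = vectorize_client.Configuration(
+        access_token=os.environ["BEARER_TOKEN"]
+    )
+    with vectorize_client.ApiClient(configuration) as api_client:
+        api = vectorize_client.ConnectorsAIPlatformsApi(api_client)
+        # List the existing AI platform connectors for the organization.
+        response = api.get_ai_platform_connectors(os.environ["VECTORIZE_ORG_ID"])
+        assert response is not None
+```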
+## Getting Started
+
+Please follow the [installation procedure](#installation--usage) and then run the following:
+
+```python
+import os
+from pprint import pprint
+
+import vectorize_client
+from vectorize_client.rest import ApiException
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below; use the example that
+# matches your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.ConnectorsAIPlatformsApi(api_client)
+ organization_id = 'organization_id_example' # str |
+ create_ai_platform_connector_request_inner = [{"name":"My CreateAIPlatformConnectorRequest","type":"BEDROCK","config":{"name":"My BEDROCKAuthConfig","access-key":"AKIAIOSFODNN7EXAMPLE","key":"key_example_123456","region":"us-east-1"}}] # List[CreateAIPlatformConnectorRequestInner] |
-```bash
-npm install
+ try:
+ # Create a new AI platform connector
+ api_response = api_instance.create_ai_platform_connector(organization_id, create_ai_platform_connector_request_inner)
+ print("The response of ConnectorsAIPlatformsApi->create_ai_platform_connector:\n")
+ pprint(api_response)
+ except ApiException as e:
+ print("Exception when calling ConnectorsAIPlatformsApi->create_ai_platform_connector: %s\n" % e)
-npm run release:ts
-npm run release:python
```
+## Documentation for API Endpoints
+
+All URIs are relative to *https://api.vectorize.io/v1*
+
+Class | Method | HTTP request | Description
+------------ | ------------- | ------------- | -------------
+*ConnectorsAIPlatformsApi* | [**create_ai_platform_connector**](docs/ConnectorsAIPlatformsApi.md#create_ai_platform_connector) | **POST** /org/{organizationId}/connectors/aiplatforms | Create a new AI platform connector
+*ConnectorsAIPlatformsApi* | [**delete_ai_platform**](docs/ConnectorsAIPlatformsApi.md#delete_ai_platform) | **DELETE** /org/{organizationId}/connectors/aiplatforms/{aiplatformId} | Delete an AI platform connector
+*ConnectorsAIPlatformsApi* | [**get_ai_platform_connector**](docs/ConnectorsAIPlatformsApi.md#get_ai_platform_connector) | **GET** /org/{organizationId}/connectors/aiplatforms/{aiplatformId} | Get an AI platform connector
+*ConnectorsAIPlatformsApi* | [**get_ai_platform_connectors**](docs/ConnectorsAIPlatformsApi.md#get_ai_platform_connectors) | **GET** /org/{organizationId}/connectors/aiplatforms | Get all existing AI Platform connectors
+*ConnectorsAIPlatformsApi* | [**update_ai_platform_connector**](docs/ConnectorsAIPlatformsApi.md#update_ai_platform_connector) | **PATCH** /org/{organizationId}/connectors/aiplatforms/{aiplatformId} | Update an AI Platform connector
+*ConnectorsDestinationConnectorsApi* | [**create_destination_connector**](docs/ConnectorsDestinationConnectorsApi.md#create_destination_connector) | **POST** /org/{organizationId}/connectors/destinations | Create a new destination connector
+*ConnectorsDestinationConnectorsApi* | [**delete_destination_connector**](docs/ConnectorsDestinationConnectorsApi.md#delete_destination_connector) | **DELETE** /org/{organizationId}/connectors/destinations/{destinationConnectorId} | Delete a destination connector
+*ConnectorsDestinationConnectorsApi* | [**get_destination_connector**](docs/ConnectorsDestinationConnectorsApi.md#get_destination_connector) | **GET** /org/{organizationId}/connectors/destinations/{destinationConnectorId} | Get a destination connector
+*ConnectorsDestinationConnectorsApi* | [**get_destination_connectors**](docs/ConnectorsDestinationConnectorsApi.md#get_destination_connectors) | **GET** /org/{organizationId}/connectors/destinations | Get all existing destination connectors
+*ConnectorsDestinationConnectorsApi* | [**update_destination_connector**](docs/ConnectorsDestinationConnectorsApi.md#update_destination_connector) | **PATCH** /org/{organizationId}/connectors/destinations/{destinationConnectorId} | Update a destination connector
+*ConnectorsSourceConnectorsApi* | [**add_user_to_source_connector**](docs/ConnectorsSourceConnectorsApi.md#add_user_to_source_connector) | **POST** /org/{organizationId}/connectors/sources/{sourceConnectorId}/users | Add a user to a source connector
+*ConnectorsSourceConnectorsApi* | [**create_source_connector**](docs/ConnectorsSourceConnectorsApi.md#create_source_connector) | **POST** /org/{organizationId}/connectors/sources | Create a new source connector
+*ConnectorsSourceConnectorsApi* | [**delete_source_connector**](docs/ConnectorsSourceConnectorsApi.md#delete_source_connector) | **DELETE** /org/{organizationId}/connectors/sources/{sourceConnectorId} | Delete a source connector
+*ConnectorsSourceConnectorsApi* | [**delete_user_from_source_connector**](docs/ConnectorsSourceConnectorsApi.md#delete_user_from_source_connector) | **DELETE** /org/{organizationId}/connectors/sources/{sourceConnectorId}/users | Delete a source connector user
+*ConnectorsSourceConnectorsApi* | [**get_source_connector**](docs/ConnectorsSourceConnectorsApi.md#get_source_connector) | **GET** /org/{organizationId}/connectors/sources/{sourceConnectorId} | Get a source connector
+*ConnectorsSourceConnectorsApi* | [**get_source_connectors**](docs/ConnectorsSourceConnectorsApi.md#get_source_connectors) | **GET** /org/{organizationId}/connectors/sources | Get all existing source connectors
+*ConnectorsSourceConnectorsApi* | [**update_source_connector**](docs/ConnectorsSourceConnectorsApi.md#update_source_connector) | **PATCH** /org/{organizationId}/connectors/sources/{sourceConnectorId} | Update a source connector
+*ConnectorsSourceConnectorsApi* | [**update_user_in_source_connector**](docs/ConnectorsSourceConnectorsApi.md#update_user_in_source_connector) | **PATCH** /org/{organizationId}/connectors/sources/{sourceConnectorId}/users | Update a source connector user
+*ExtractionApi* | [**get_extraction_result**](docs/ExtractionApi.md#get_extraction_result) | **GET** /org/{organizationId}/extraction/{extractionId} | Get extraction result
+*ExtractionApi* | [**start_extraction**](docs/ExtractionApi.md#start_extraction) | **POST** /org/{organizationId}/extraction | Start content extraction from a file
+*FilesApi* | [**start_file_upload**](docs/FilesApi.md#start_file_upload) | **POST** /org/{organizationId}/files | Upload a generic file to the platform
+*PipelinesApi* | [**create_pipeline**](docs/PipelinesApi.md#create_pipeline) | **POST** /org/{organizationId}/pipelines | Create a new pipeline
+*PipelinesApi* | [**delete_pipeline**](docs/PipelinesApi.md#delete_pipeline) | **DELETE** /org/{organizationId}/pipelines/{pipelineId} | Delete a pipeline
+*PipelinesApi* | [**get_deep_research_result**](docs/PipelinesApi.md#get_deep_research_result) | **GET** /org/{organizationId}/pipelines/{pipelineId}/deep-research/{researchId} | Get deep research result
+*PipelinesApi* | [**get_pipeline**](docs/PipelinesApi.md#get_pipeline) | **GET** /org/{organizationId}/pipelines/{pipelineId} | Get a pipeline
+*PipelinesApi* | [**get_pipeline_events**](docs/PipelinesApi.md#get_pipeline_events) | **GET** /org/{organizationId}/pipelines/{pipelineId}/events | Get pipeline events
+*PipelinesApi* | [**get_pipeline_metrics**](docs/PipelinesApi.md#get_pipeline_metrics) | **GET** /org/{organizationId}/pipelines/{pipelineId}/metrics | Get pipeline metrics
+*PipelinesApi* | [**get_pipelines**](docs/PipelinesApi.md#get_pipelines) | **GET** /org/{organizationId}/pipelines | Get all pipelines
+*PipelinesApi* | [**retrieve_documents**](docs/PipelinesApi.md#retrieve_documents) | **POST** /org/{organizationId}/pipelines/{pipelineId}/retrieval | Retrieve documents from a pipeline
+*PipelinesApi* | [**start_deep_research**](docs/PipelinesApi.md#start_deep_research) | **POST** /org/{organizationId}/pipelines/{pipelineId}/deep-research | Start a deep research
+*PipelinesApi* | [**start_pipeline**](docs/PipelinesApi.md#start_pipeline) | **POST** /org/{organizationId}/pipelines/{pipelineId}/start | Start a pipeline
+*PipelinesApi* | [**stop_pipeline**](docs/PipelinesApi.md#stop_pipeline) | **POST** /org/{organizationId}/pipelines/{pipelineId}/stop | Stop a pipeline
+*UploadsApi* | [**delete_file_from_connector**](docs/UploadsApi.md#delete_file_from_connector) | **DELETE** /org/{organizationId}/uploads/{connectorId}/files | Delete a file from a file upload connector
+*UploadsApi* | [**get_upload_files_from_connector**](docs/UploadsApi.md#get_upload_files_from_connector) | **GET** /org/{organizationId}/uploads/{connectorId}/files | Get uploaded files from a file upload connector
+*UploadsApi* | [**start_file_upload_to_connector**](docs/UploadsApi.md#start_file_upload_to_connector) | **PUT** /org/{organizationId}/uploads/{connectorId}/files | Upload a file to a file upload connector
+
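+As a concrete example of calling one of the endpoints above, the sketch below lists all pipelines in an organization via `PipelinesApi.get_pipelines`. It assumes a bearer token in a `BEARER_TOKEN` environment variable (a convention borrowed from the Getting Started example), and the organization ID shown is a placeholder.
+
+```python
+import os
+from pprint import pprint
+
+import vectorize_client
+
+configuration = vectorize_client.Configuration(
+    access_token=os.environ["BEARER_TOKEN"]
+)
+
+with vectorize_client.ApiClient(configuration) as api_client:
+    pipelines_api = vectorize_client.PipelinesApi(api_client)
+    # GET /org/{organizationId}/pipelines - list all pipelines in the organization
+    pipelines = pipelines_api.get_pipelines("organization_id_example")
+    pprint(pipelines)
+```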
+
+## Documentation For Models
+
+ - [AIPlatform](docs/AIPlatform.md)
+ - [AIPlatformConfigSchema](docs/AIPlatformConfigSchema.md)
+ - [AIPlatformInput](docs/AIPlatformInput.md)
+ - [AIPlatformSchema](docs/AIPlatformSchema.md)
+ - [AIPlatformType](docs/AIPlatformType.md)
+ - [AWSS3AuthConfig](docs/AWSS3AuthConfig.md)
+ - [AWSS3Config](docs/AWSS3Config.md)
+ - [AZUREAISEARCHAuthConfig](docs/AZUREAISEARCHAuthConfig.md)
+ - [AZUREAISEARCHConfig](docs/AZUREAISEARCHConfig.md)
+ - [AZUREBLOBAuthConfig](docs/AZUREBLOBAuthConfig.md)
+ - [AZUREBLOBConfig](docs/AZUREBLOBConfig.md)
+ - [AddUserFromSourceConnectorResponse](docs/AddUserFromSourceConnectorResponse.md)
+ - [AddUserToSourceConnectorRequest](docs/AddUserToSourceConnectorRequest.md)
+ - [AddUserToSourceConnectorRequestSelectedFiles](docs/AddUserToSourceConnectorRequestSelectedFiles.md)
+ - [AddUserToSourceConnectorRequestSelectedFilesAnyOf](docs/AddUserToSourceConnectorRequestSelectedFilesAnyOf.md)
+ - [AddUserToSourceConnectorRequestSelectedFilesAnyOfValue](docs/AddUserToSourceConnectorRequestSelectedFilesAnyOfValue.md)
+ - [AdvancedQuery](docs/AdvancedQuery.md)
+ - [AmazonS3](docs/AmazonS3.md)
+ - [AmazonS31](docs/AmazonS31.md)
+ - [AmazonS32](docs/AmazonS32.md)
+ - [AzureBlobStorage](docs/AzureBlobStorage.md)
+ - [AzureBlobStorage1](docs/AzureBlobStorage1.md)
+ - [AzureBlobStorage2](docs/AzureBlobStorage2.md)
+ - [Azureaisearch](docs/Azureaisearch.md)
+ - [Azureaisearch1](docs/Azureaisearch1.md)
+ - [Azureaisearch2](docs/Azureaisearch2.md)
+ - [BEDROCKAuthConfig](docs/BEDROCKAuthConfig.md)
+ - [Bedrock](docs/Bedrock.md)
+ - [Bedrock1](docs/Bedrock1.md)
+ - [Bedrock2](docs/Bedrock2.md)
+ - [CAPELLAAuthConfig](docs/CAPELLAAuthConfig.md)
+ - [CAPELLAConfig](docs/CAPELLAConfig.md)
+ - [CONFLUENCEAuthConfig](docs/CONFLUENCEAuthConfig.md)
+ - [CONFLUENCEConfig](docs/CONFLUENCEConfig.md)
+ - [Capella](docs/Capella.md)
+ - [Capella1](docs/Capella1.md)
+ - [Capella2](docs/Capella2.md)
+ - [Confluence](docs/Confluence.md)
+ - [Confluence1](docs/Confluence1.md)
+ - [Confluence2](docs/Confluence2.md)
+ - [CreateAIPlatformConnector](docs/CreateAIPlatformConnector.md)
+ - [CreateAIPlatformConnectorRequestInner](docs/CreateAIPlatformConnectorRequestInner.md)
+ - [CreateAIPlatformConnectorResponse](docs/CreateAIPlatformConnectorResponse.md)
+ - [CreateDestinationConnector](docs/CreateDestinationConnector.md)
+ - [CreateDestinationConnectorRequestInner](docs/CreateDestinationConnectorRequestInner.md)
+ - [CreateDestinationConnectorResponse](docs/CreateDestinationConnectorResponse.md)
+ - [CreatePipelineResponse](docs/CreatePipelineResponse.md)
+ - [CreatePipelineResponseData](docs/CreatePipelineResponseData.md)
+ - [CreateSourceConnector](docs/CreateSourceConnector.md)
+ - [CreateSourceConnectorRequestInner](docs/CreateSourceConnectorRequestInner.md)
+ - [CreateSourceConnectorResponse](docs/CreateSourceConnectorResponse.md)
+ - [CreatedAIPlatformConnector](docs/CreatedAIPlatformConnector.md)
+ - [CreatedDestinationConnector](docs/CreatedDestinationConnector.md)
+ - [CreatedSourceConnector](docs/CreatedSourceConnector.md)
+ - [DATASTAXAuthConfig](docs/DATASTAXAuthConfig.md)
+ - [DATASTAXConfig](docs/DATASTAXConfig.md)
+ - [DISCORDAuthConfig](docs/DISCORDAuthConfig.md)
+ - [DISCORDConfig](docs/DISCORDConfig.md)
+ - [DROPBOXAuthConfig](docs/DROPBOXAuthConfig.md)
+ - [DROPBOXConfig](docs/DROPBOXConfig.md)
+ - [DROPBOXOAUTHAuthConfig](docs/DROPBOXOAUTHAuthConfig.md)
+ - [DROPBOXOAUTHMULTIAuthConfig](docs/DROPBOXOAUTHMULTIAuthConfig.md)
+ - [DROPBOXOAUTHMULTICUSTOMAuthConfig](docs/DROPBOXOAUTHMULTICUSTOMAuthConfig.md)
+ - [Datastax](docs/Datastax.md)
+ - [Datastax1](docs/Datastax1.md)
+ - [Datastax2](docs/Datastax2.md)
+ - [DeepResearchResult](docs/DeepResearchResult.md)
+ - [DeleteAIPlatformConnectorResponse](docs/DeleteAIPlatformConnectorResponse.md)
+ - [DeleteDestinationConnectorResponse](docs/DeleteDestinationConnectorResponse.md)
+ - [DeleteFileResponse](docs/DeleteFileResponse.md)
+ - [DeletePipelineResponse](docs/DeletePipelineResponse.md)
+ - [DeleteSourceConnectorResponse](docs/DeleteSourceConnectorResponse.md)
+ - [DestinationConnector](docs/DestinationConnector.md)
+ - [DestinationConnectorInput](docs/DestinationConnectorInput.md)
+ - [DestinationConnectorInputConfig](docs/DestinationConnectorInputConfig.md)
+ - [DestinationConnectorSchema](docs/DestinationConnectorSchema.md)
+ - [DestinationConnectorType](docs/DestinationConnectorType.md)
+ - [Discord](docs/Discord.md)
+ - [Discord1](docs/Discord1.md)
+ - [Discord2](docs/Discord2.md)
+ - [Document](docs/Document.md)
+ - [Dropbox](docs/Dropbox.md)
+ - [Dropbox1](docs/Dropbox1.md)
+ - [Dropbox2](docs/Dropbox2.md)
+ - [DropboxOauth](docs/DropboxOauth.md)
+ - [DropboxOauth1](docs/DropboxOauth1.md)
+ - [DropboxOauth2](docs/DropboxOauth2.md)
+ - [DropboxOauthMulti](docs/DropboxOauthMulti.md)
+ - [DropboxOauthMulti1](docs/DropboxOauthMulti1.md)
+ - [DropboxOauthMulti2](docs/DropboxOauthMulti2.md)
+ - [DropboxOauthMultiCustom](docs/DropboxOauthMultiCustom.md)
+ - [DropboxOauthMultiCustom1](docs/DropboxOauthMultiCustom1.md)
+ - [DropboxOauthMultiCustom2](docs/DropboxOauthMultiCustom2.md)
+ - [ELASTICAuthConfig](docs/ELASTICAuthConfig.md)
+ - [ELASTICConfig](docs/ELASTICConfig.md)
+ - [Elastic](docs/Elastic.md)
+ - [Elastic1](docs/Elastic1.md)
+ - [Elastic2](docs/Elastic2.md)
+ - [ExtractionChunkingStrategy](docs/ExtractionChunkingStrategy.md)
+ - [ExtractionResult](docs/ExtractionResult.md)
+ - [ExtractionResultResponse](docs/ExtractionResultResponse.md)
+ - [ExtractionType](docs/ExtractionType.md)
+ - [FILEUPLOADAuthConfig](docs/FILEUPLOADAuthConfig.md)
+ - [FIRECRAWLAuthConfig](docs/FIRECRAWLAuthConfig.md)
+ - [FIRECRAWLConfig](docs/FIRECRAWLConfig.md)
+ - [FIREFLIESAuthConfig](docs/FIREFLIESAuthConfig.md)
+ - [FIREFLIESConfig](docs/FIREFLIESConfig.md)
+ - [FileUpload](docs/FileUpload.md)
+ - [FileUpload1](docs/FileUpload1.md)
+ - [FileUpload2](docs/FileUpload2.md)
+ - [Firecrawl](docs/Firecrawl.md)
+ - [Firecrawl1](docs/Firecrawl1.md)
+ - [Firecrawl2](docs/Firecrawl2.md)
+ - [Fireflies](docs/Fireflies.md)
+ - [Fireflies1](docs/Fireflies1.md)
+ - [Fireflies2](docs/Fireflies2.md)
+ - [GCSAuthConfig](docs/GCSAuthConfig.md)
+ - [GCSConfig](docs/GCSConfig.md)
+ - [GITHUBAuthConfig](docs/GITHUBAuthConfig.md)
+ - [GITHUBConfig](docs/GITHUBConfig.md)
+ - [GOOGLEDRIVEAuthConfig](docs/GOOGLEDRIVEAuthConfig.md)
+ - [GOOGLEDRIVEConfig](docs/GOOGLEDRIVEConfig.md)
+ - [GOOGLEDRIVEOAUTHAuthConfig](docs/GOOGLEDRIVEOAUTHAuthConfig.md)
+ - [GOOGLEDRIVEOAUTHConfig](docs/GOOGLEDRIVEOAUTHConfig.md)
+ - [GOOGLEDRIVEOAUTHMULTIAuthConfig](docs/GOOGLEDRIVEOAUTHMULTIAuthConfig.md)
+ - [GOOGLEDRIVEOAUTHMULTICUSTOMAuthConfig](docs/GOOGLEDRIVEOAUTHMULTICUSTOMAuthConfig.md)
+ - [GOOGLEDRIVEOAUTHMULTICUSTOMConfig](docs/GOOGLEDRIVEOAUTHMULTICUSTOMConfig.md)
+ - [GOOGLEDRIVEOAUTHMULTIConfig](docs/GOOGLEDRIVEOAUTHMULTIConfig.md)
+ - [GetAIPlatformConnectors200Response](docs/GetAIPlatformConnectors200Response.md)
+ - [GetDeepResearchResponse](docs/GetDeepResearchResponse.md)
+ - [GetDestinationConnectors200Response](docs/GetDestinationConnectors200Response.md)
+ - [GetPipelineEventsResponse](docs/GetPipelineEventsResponse.md)
+ - [GetPipelineMetricsResponse](docs/GetPipelineMetricsResponse.md)
+ - [GetPipelineResponse](docs/GetPipelineResponse.md)
+ - [GetPipelines400Response](docs/GetPipelines400Response.md)
+ - [GetPipelinesResponse](docs/GetPipelinesResponse.md)
+ - [GetSourceConnectors200Response](docs/GetSourceConnectors200Response.md)
+ - [GetUploadFilesResponse](docs/GetUploadFilesResponse.md)
+ - [Github](docs/Github.md)
+ - [Github1](docs/Github1.md)
+ - [Github2](docs/Github2.md)
+ - [GoogleCloudStorage](docs/GoogleCloudStorage.md)
+ - [GoogleCloudStorage1](docs/GoogleCloudStorage1.md)
+ - [GoogleCloudStorage2](docs/GoogleCloudStorage2.md)
+ - [GoogleDrive](docs/GoogleDrive.md)
+ - [GoogleDrive1](docs/GoogleDrive1.md)
+ - [GoogleDrive2](docs/GoogleDrive2.md)
+ - [GoogleDriveOAuth](docs/GoogleDriveOAuth.md)
+ - [GoogleDriveOAuth1](docs/GoogleDriveOAuth1.md)
+ - [GoogleDriveOAuth2](docs/GoogleDriveOAuth2.md)
+ - [GoogleDriveOauthMulti](docs/GoogleDriveOauthMulti.md)
+ - [GoogleDriveOauthMulti1](docs/GoogleDriveOauthMulti1.md)
+ - [GoogleDriveOauthMulti2](docs/GoogleDriveOauthMulti2.md)
+ - [GoogleDriveOauthMultiCustom](docs/GoogleDriveOauthMultiCustom.md)
+ - [GoogleDriveOauthMultiCustom1](docs/GoogleDriveOauthMultiCustom1.md)
+ - [GoogleDriveOauthMultiCustom2](docs/GoogleDriveOauthMultiCustom2.md)
+ - [INTERCOMAuthConfig](docs/INTERCOMAuthConfig.md)
+ - [INTERCOMConfig](docs/INTERCOMConfig.md)
+ - [Intercom](docs/Intercom.md)
+ - [Intercom1](docs/Intercom1.md)
+ - [Intercom2](docs/Intercom2.md)
+ - [MILVUSAuthConfig](docs/MILVUSAuthConfig.md)
+ - [MILVUSConfig](docs/MILVUSConfig.md)
+ - [MetadataExtractionStrategy](docs/MetadataExtractionStrategy.md)
+ - [MetadataExtractionStrategySchema](docs/MetadataExtractionStrategySchema.md)
+ - [Milvus](docs/Milvus.md)
+ - [Milvus1](docs/Milvus1.md)
+ - [Milvus2](docs/Milvus2.md)
+ - [N8NConfig](docs/N8NConfig.md)
+ - [NOTIONAuthConfig](docs/NOTIONAuthConfig.md)
+ - [NOTIONConfig](docs/NOTIONConfig.md)
+ - [NOTIONOAUTHMULTIAuthConfig](docs/NOTIONOAUTHMULTIAuthConfig.md)
+ - [NOTIONOAUTHMULTICUSTOMAuthConfig](docs/NOTIONOAUTHMULTICUSTOMAuthConfig.md)
+ - [Notion](docs/Notion.md)
+ - [Notion1](docs/Notion1.md)
+ - [Notion2](docs/Notion2.md)
+ - [NotionOauthMulti](docs/NotionOauthMulti.md)
+ - [NotionOauthMulti1](docs/NotionOauthMulti1.md)
+ - [NotionOauthMulti2](docs/NotionOauthMulti2.md)
+ - [NotionOauthMultiCustom](docs/NotionOauthMultiCustom.md)
+ - [NotionOauthMultiCustom1](docs/NotionOauthMultiCustom1.md)
+ - [NotionOauthMultiCustom2](docs/NotionOauthMultiCustom2.md)
+ - [ONEDRIVEAuthConfig](docs/ONEDRIVEAuthConfig.md)
+ - [ONEDRIVEConfig](docs/ONEDRIVEConfig.md)
+ - [OPENAIAuthConfig](docs/OPENAIAuthConfig.md)
+ - [OneDrive](docs/OneDrive.md)
+ - [OneDrive1](docs/OneDrive1.md)
+ - [OneDrive2](docs/OneDrive2.md)
+ - [Openai](docs/Openai.md)
+ - [Openai1](docs/Openai1.md)
+ - [Openai2](docs/Openai2.md)
+ - [PINECONEAuthConfig](docs/PINECONEAuthConfig.md)
+ - [PINECONEConfig](docs/PINECONEConfig.md)
+ - [POSTGRESQLAuthConfig](docs/POSTGRESQLAuthConfig.md)
+ - [POSTGRESQLConfig](docs/POSTGRESQLConfig.md)
+ - [Pinecone](docs/Pinecone.md)
+ - [Pinecone1](docs/Pinecone1.md)
+ - [Pinecone2](docs/Pinecone2.md)
+ - [PipelineAIPlatformRequestInner](docs/PipelineAIPlatformRequestInner.md)
+ - [PipelineConfigurationSchema](docs/PipelineConfigurationSchema.md)
+ - [PipelineDestinationConnectorRequestInner](docs/PipelineDestinationConnectorRequestInner.md)
+ - [PipelineEvents](docs/PipelineEvents.md)
+ - [PipelineListSummary](docs/PipelineListSummary.md)
+ - [PipelineMetrics](docs/PipelineMetrics.md)
+ - [PipelineSourceConnectorRequestInner](docs/PipelineSourceConnectorRequestInner.md)
+ - [PipelineSummary](docs/PipelineSummary.md)
+ - [Postgresql](docs/Postgresql.md)
+ - [Postgresql1](docs/Postgresql1.md)
+ - [Postgresql2](docs/Postgresql2.md)
+ - [QDRANTAuthConfig](docs/QDRANTAuthConfig.md)
+ - [QDRANTConfig](docs/QDRANTConfig.md)
+ - [Qdrant](docs/Qdrant.md)
+ - [Qdrant1](docs/Qdrant1.md)
+ - [Qdrant2](docs/Qdrant2.md)
+ - [RemoveUserFromSourceConnectorRequest](docs/RemoveUserFromSourceConnectorRequest.md)
+ - [RemoveUserFromSourceConnectorResponse](docs/RemoveUserFromSourceConnectorResponse.md)
+ - [RetrieveContext](docs/RetrieveContext.md)
+ - [RetrieveContextMessage](docs/RetrieveContextMessage.md)
+ - [RetrieveDocumentsRequest](docs/RetrieveDocumentsRequest.md)
+ - [RetrieveDocumentsResponse](docs/RetrieveDocumentsResponse.md)
+ - [SHAREPOINTAuthConfig](docs/SHAREPOINTAuthConfig.md)
+ - [SHAREPOINTConfig](docs/SHAREPOINTConfig.md)
+ - [SINGLESTOREAuthConfig](docs/SINGLESTOREAuthConfig.md)
+ - [SINGLESTOREConfig](docs/SINGLESTOREConfig.md)
+ - [SUPABASEAuthConfig](docs/SUPABASEAuthConfig.md)
+ - [SUPABASEConfig](docs/SUPABASEConfig.md)
+ - [ScheduleSchema](docs/ScheduleSchema.md)
+ - [ScheduleSchemaType](docs/ScheduleSchemaType.md)
+ - [Sharepoint](docs/Sharepoint.md)
+ - [Sharepoint1](docs/Sharepoint1.md)
+ - [Sharepoint2](docs/Sharepoint2.md)
+ - [Singlestore](docs/Singlestore.md)
+ - [Singlestore1](docs/Singlestore1.md)
+ - [Singlestore2](docs/Singlestore2.md)
+ - [SourceConnector](docs/SourceConnector.md)
+ - [SourceConnectorInput](docs/SourceConnectorInput.md)
+ - [SourceConnectorInputConfig](docs/SourceConnectorInputConfig.md)
+ - [SourceConnectorSchema](docs/SourceConnectorSchema.md)
+ - [SourceConnectorType](docs/SourceConnectorType.md)
+ - [StartDeepResearchRequest](docs/StartDeepResearchRequest.md)
+ - [StartDeepResearchResponse](docs/StartDeepResearchResponse.md)
+ - [StartExtractionRequest](docs/StartExtractionRequest.md)
+ - [StartExtractionResponse](docs/StartExtractionResponse.md)
+ - [StartFileUploadRequest](docs/StartFileUploadRequest.md)
+ - [StartFileUploadResponse](docs/StartFileUploadResponse.md)
+ - [StartFileUploadToConnectorRequest](docs/StartFileUploadToConnectorRequest.md)
+ - [StartFileUploadToConnectorResponse](docs/StartFileUploadToConnectorResponse.md)
+ - [StartPipelineResponse](docs/StartPipelineResponse.md)
+ - [StopPipelineResponse](docs/StopPipelineResponse.md)
+ - [Supabase](docs/Supabase.md)
+ - [Supabase1](docs/Supabase1.md)
+ - [Supabase2](docs/Supabase2.md)
+ - [TURBOPUFFERAuthConfig](docs/TURBOPUFFERAuthConfig.md)
+ - [TURBOPUFFERConfig](docs/TURBOPUFFERConfig.md)
+ - [Turbopuffer](docs/Turbopuffer.md)
+ - [Turbopuffer1](docs/Turbopuffer1.md)
+ - [Turbopuffer2](docs/Turbopuffer2.md)
+ - [UpdateAIPlatformConnectorRequest](docs/UpdateAIPlatformConnectorRequest.md)
+ - [UpdateAIPlatformConnectorResponse](docs/UpdateAIPlatformConnectorResponse.md)
+ - [UpdateAiplatformConnectorRequest](docs/UpdateAiplatformConnectorRequest.md)
+ - [UpdateDestinationConnectorRequest](docs/UpdateDestinationConnectorRequest.md)
+ - [UpdateDestinationConnectorResponse](docs/UpdateDestinationConnectorResponse.md)
+ - [UpdateSourceConnectorRequest](docs/UpdateSourceConnectorRequest.md)
+ - [UpdateSourceConnectorResponse](docs/UpdateSourceConnectorResponse.md)
+ - [UpdateSourceConnectorResponseData](docs/UpdateSourceConnectorResponseData.md)
+ - [UpdateUserInSourceConnectorRequest](docs/UpdateUserInSourceConnectorRequest.md)
+ - [UpdateUserInSourceConnectorResponse](docs/UpdateUserInSourceConnectorResponse.md)
+ - [UpdatedAIPlatformConnectorData](docs/UpdatedAIPlatformConnectorData.md)
+ - [UpdatedDestinationConnectorData](docs/UpdatedDestinationConnectorData.md)
+ - [UploadFile](docs/UploadFile.md)
+ - [VERTEXAuthConfig](docs/VERTEXAuthConfig.md)
+ - [VOYAGEAuthConfig](docs/VOYAGEAuthConfig.md)
+ - [Vertex](docs/Vertex.md)
+ - [Vertex1](docs/Vertex1.md)
+ - [Vertex2](docs/Vertex2.md)
+ - [Voyage](docs/Voyage.md)
+ - [Voyage1](docs/Voyage1.md)
+ - [Voyage2](docs/Voyage2.md)
+ - [WEAVIATEAuthConfig](docs/WEAVIATEAuthConfig.md)
+ - [WEAVIATEConfig](docs/WEAVIATEConfig.md)
+ - [WEBCRAWLERAuthConfig](docs/WEBCRAWLERAuthConfig.md)
+ - [WEBCRAWLERConfig](docs/WEBCRAWLERConfig.md)
+ - [Weaviate](docs/Weaviate.md)
+ - [Weaviate1](docs/Weaviate1.md)
+ - [Weaviate2](docs/Weaviate2.md)
+ - [WebCrawler](docs/WebCrawler.md)
+ - [WebCrawler1](docs/WebCrawler1.md)
+ - [WebCrawler2](docs/WebCrawler2.md)
+
+
+
+## Documentation For Authorization
+
+
+Authentication schemes defined for the API:
+
+### bearerAuth
+
+- **Type**: Bearer authentication (JWT)
+
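+For example, a client that authenticates with this scheme passes the JWT as `access_token`; it is then sent as an `Authorization: Bearer <token>` header on each request. Reading the token from an environment variable, as below, is just a suggested convention:
+
+```python
+import os
+
+import vectorize_client
+
+# bearerAuth: the token is sent as "Authorization: Bearer <token>".
+configuration = vectorize_client.Configuration(
+    access_token=os.environ["BEARER_TOKEN"]
+)
+```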
+
+## Author
+
+
diff --git a/docs/AIPlatform.md b/docs/AIPlatform.md
new file mode 100644
index 0000000..96388e6
--- /dev/null
+++ b/docs/AIPlatform.md
@@ -0,0 +1,39 @@
+# AIPlatform
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | |
+**type** | **str** | |
+**name** | **str** | |
+**config_doc** | **Dict[str, Optional[object]]** | | [optional]
+**created_at** | **str** | | [optional]
+**created_by_id** | **str** | | [optional]
+**last_updated_by_id** | **str** | | [optional]
+**created_by_email** | **str** | | [optional]
+**last_updated_by_email** | **str** | | [optional]
+**error_message** | **str** | | [optional]
+**verification_status** | **str** | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.ai_platform import AIPlatform
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of AIPlatform from a JSON string
+ai_platform_instance = AIPlatform.from_json(json)
+# print the JSON string representation of the object
+print(ai_platform_instance.to_json())
+
+# convert the object into a dict
+ai_platform_dict = ai_platform_instance.to_dict()
+# create an instance of AIPlatform from a dict
+ai_platform_from_dict = AIPlatform.from_dict(ai_platform_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/AIPlatformConfigSchema.md b/docs/AIPlatformConfigSchema.md
new file mode 100644
index 0000000..6291b72
--- /dev/null
+++ b/docs/AIPlatformConfigSchema.md
@@ -0,0 +1,34 @@
+# AIPlatformConfigSchema
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**embedding_model** | **str** | | [optional]
+**chunking_strategy** | **str** | | [optional]
+**chunk_size** | **int** | | [optional]
+**chunk_overlap** | **int** | | [optional]
+**dimensions** | **int** | | [optional]
+**extraction_strategy** | **str** | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.ai_platform_config_schema import AIPlatformConfigSchema
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of AIPlatformConfigSchema from a JSON string
+ai_platform_config_schema_instance = AIPlatformConfigSchema.from_json(json)
+# print the JSON string representation of the object
+print(ai_platform_config_schema_instance.to_json())
+
+# convert the object into a dict
+ai_platform_config_schema_dict = ai_platform_config_schema_instance.to_dict()
+# create an instance of AIPlatformConfigSchema from a dict
+ai_platform_config_schema_from_dict = AIPlatformConfigSchema.from_dict(ai_platform_config_schema_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/AIPlatformInput.md b/docs/AIPlatformInput.md
new file mode 100644
index 0000000..6579a83
--- /dev/null
+++ b/docs/AIPlatformInput.md
@@ -0,0 +1,32 @@
+# AIPlatformInput
+
+AI platform configuration
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the AI platform |
+**type** | **str** | Type of AI platform |
+**config** | **object** | Configuration specific to the AI platform |
+
+## Example
+
+```python
+from vectorize_client.models.ai_platform_input import AIPlatformInput
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of AIPlatformInput from a JSON string
+ai_platform_input_instance = AIPlatformInput.from_json(json)
+# print the JSON string representation of the object
+print(ai_platform_input_instance.to_json())
+
+# convert the object into a dict
+ai_platform_input_dict = ai_platform_input_instance.to_dict()
+# create an instance of AIPlatformInput from a dict
+ai_platform_input_from_dict = AIPlatformInput.from_dict(ai_platform_input_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/AIPlatformSchema.md b/docs/AIPlatformSchema.md
new file mode 100644
index 0000000..995b683
--- /dev/null
+++ b/docs/AIPlatformSchema.md
@@ -0,0 +1,31 @@
+# AIPlatformSchema
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | |
+**type** | [**AIPlatformType**](AIPlatformType.md) | |
+**config** | [**AIPlatformConfigSchema**](AIPlatformConfigSchema.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.ai_platform_schema import AIPlatformSchema
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of AIPlatformSchema from a JSON string
+ai_platform_schema_instance = AIPlatformSchema.from_json(json)
+# print the JSON string representation of the object
+print(ai_platform_schema_instance.to_json())
+
+# convert the object into a dict
+ai_platform_schema_dict = ai_platform_schema_instance.to_dict()
+# create an instance of AIPlatformSchema from a dict
+ai_platform_schema_from_dict = AIPlatformSchema.from_dict(ai_platform_schema_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/AIPlatformType.md b/docs/AIPlatformType.md
new file mode 100644
index 0000000..e289a27
--- /dev/null
+++ b/docs/AIPlatformType.md
@@ -0,0 +1,16 @@
+# AIPlatformType
+
+
+## Enum
+
+* `BEDROCK` (value: `'BEDROCK'`)
+
+* `VERTEX` (value: `'VERTEX'`)
+
+* `OPENAI` (value: `'OPENAI'`)
+
+* `VOYAGE` (value: `'VOYAGE'`)
+
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/AWSS3AuthConfig.md b/docs/AWSS3AuthConfig.md
new file mode 100644
index 0000000..abf8eeb
--- /dev/null
+++ b/docs/AWSS3AuthConfig.md
@@ -0,0 +1,36 @@
+# AWSS3AuthConfig
+
+Authentication configuration for Amazon S3
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name |
+**access_key** | **str** | Access Key. Example: Enter Access Key |
+**secret_key** | **str** | Secret Key. Example: Enter Secret Key |
+**bucket_name** | **str** | Bucket Name. Example: Enter your S3 Bucket Name |
+**endpoint** | **str** | Endpoint. Example: Enter Endpoint URL | [optional]
+**region** | **str** | Region. Example: Region Name | [optional]
+**archiver** | **bool** | Allow as archive destination | [default to False]
+
+## Example
+
+```python
+from vectorize_client.models.awss3_auth_config import AWSS3AuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of AWSS3AuthConfig from a JSON string
+awss3_auth_config_instance = AWSS3AuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(awss3_auth_config_instance.to_json())
+
+# convert the object into a dict
+awss3_auth_config_dict = awss3_auth_config_instance.to_dict()
+# create an instance of AWSS3AuthConfig from a dict
+awss3_auth_config_from_dict = AWSS3AuthConfig.from_dict(awss3_auth_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/AWSS3Config.md b/docs/AWSS3Config.md
new file mode 100644
index 0000000..f0e8fb9
--- /dev/null
+++ b/docs/AWSS3Config.md
@@ -0,0 +1,35 @@
+# AWSS3Config
+
+Configuration for Amazon S3 connector
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**file_extensions** | **List[str]** | File Extensions |
+**idle_time** | **float** | Check for updates every (seconds) | [default to 5]
+**recursive** | **bool** | Recursively scan all folders in the bucket | [optional]
+**path_prefix** | **str** | Path Prefix | [optional]
+**path_metadata_regex** | **str** | Path Metadata Regex | [optional]
+**path_regex_group_names** | **str** | Path Regex Group Names. Example: Enter Group Name | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.awss3_config import AWSS3Config
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of AWSS3Config from a JSON string
+awss3_config_instance = AWSS3Config.from_json(json)
+# print the JSON string representation of the object
+print(awss3_config_instance.to_json())
+
+# convert the object into a dict
+awss3_config_dict = awss3_config_instance.to_dict()
+# create an instance of AWSS3Config from a dict
+awss3_config_from_dict = AWSS3Config.from_dict(awss3_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/AZUREAISEARCHAuthConfig.md b/docs/AZUREAISEARCHAuthConfig.md
new file mode 100644
index 0000000..548a08f
--- /dev/null
+++ b/docs/AZUREAISEARCHAuthConfig.md
@@ -0,0 +1,32 @@
+# AZUREAISEARCHAuthConfig
+
+Authentication configuration for Azure AI Search
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name for your Azure AI Search integration |
+**service_name** | **str** | Azure AI Search Service Name. Example: Enter your Azure AI Search service name |
+**api_key** | **str** | API Key. Example: Enter your API key |
+
+## Example
+
+```python
+from vectorize_client.models.azureaisearch_auth_config import AZUREAISEARCHAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of AZUREAISEARCHAuthConfig from a JSON string
+azureaisearch_auth_config_instance = AZUREAISEARCHAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(azureaisearch_auth_config_instance.to_json())
+
+# convert the object into a dict
+azureaisearch_auth_config_dict = azureaisearch_auth_config_instance.to_dict()
+# create an instance of AZUREAISEARCHAuthConfig from a dict
+azureaisearch_auth_config_from_dict = AZUREAISEARCHAuthConfig.from_dict(azureaisearch_auth_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/AZUREAISEARCHConfig.md b/docs/AZUREAISEARCHConfig.md
new file mode 100644
index 0000000..9fc02f5
--- /dev/null
+++ b/docs/AZUREAISEARCHConfig.md
@@ -0,0 +1,30 @@
+# AZUREAISEARCHConfig
+
+Configuration for Azure AI Search connector
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**index** | **str** | Index Name. Example: Enter index name |
+
+## Example
+
+```python
+from vectorize_client.models.azureaisearch_config import AZUREAISEARCHConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of AZUREAISEARCHConfig from a JSON string
+azureaisearch_config_instance = AZUREAISEARCHConfig.from_json(json)
+# print the JSON string representation of the object
+print(azureaisearch_config_instance.to_json())
+
+# convert the object into a dict
+azureaisearch_config_dict = azureaisearch_config_instance.to_dict()
+# create an instance of AZUREAISEARCHConfig from a dict
+azureaisearch_config_from_dict = AZUREAISEARCHConfig.from_dict(azureaisearch_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/AZUREBLOBAuthConfig.md b/docs/AZUREBLOBAuthConfig.md
new file mode 100644
index 0000000..089628d
--- /dev/null
+++ b/docs/AZUREBLOBAuthConfig.md
@@ -0,0 +1,34 @@
+# AZUREBLOBAuthConfig
+
+Authentication configuration for Azure Blob Storage
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name |
+**storage_account_name** | **str** | Storage Account Name. Example: Enter Storage Account Name |
+**storage_account_key** | **str** | Storage Account Key. Example: Enter Storage Account Key |
+**container** | **str** | Container. Example: Enter Container Name |
+**endpoint** | **str** | Endpoint. Example: Enter Endpoint URL | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.azureblob_auth_config import AZUREBLOBAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of AZUREBLOBAuthConfig from a JSON string
+azureblob_auth_config_instance = AZUREBLOBAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(azureblob_auth_config_instance.to_json())
+
+# convert the object into a dict
+azureblob_auth_config_dict = azureblob_auth_config_instance.to_dict()
+# create an instance of AZUREBLOBAuthConfig from a dict
+azureblob_auth_config_from_dict = AZUREBLOBAuthConfig.from_dict(azureblob_auth_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/AZUREBLOBConfig.md b/docs/AZUREBLOBConfig.md
new file mode 100644
index 0000000..e98e6aa
--- /dev/null
+++ b/docs/AZUREBLOBConfig.md
@@ -0,0 +1,35 @@
+# AZUREBLOBConfig
+
+Configuration for Azure Blob Storage connector
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**file_extensions** | **List[str]** | File Extensions |
+**idle_time** | **float** | Polling Interval (seconds) | [default to 5]
+**recursive** | **bool** | Recursively scan all folders in the bucket | [optional]
+**path_prefix** | **str** | Path Prefix | [optional]
+**path_metadata_regex** | **str** | Path Metadata Regex | [optional]
+**path_regex_group_names** | **str** | Path Regex Group Names. Example: Enter Group Name | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.azureblob_config import AZUREBLOBConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of AZUREBLOBConfig from a JSON string
+azureblob_config_instance = AZUREBLOBConfig.from_json(json)
+# print the JSON string representation of the object
+print(azureblob_config_instance.to_json())
+
+# convert the object into a dict
+azureblob_config_dict = azureblob_config_instance.to_dict()
+# create an instance of AZUREBLOBConfig from a dict
+azureblob_config_from_dict = AZUREBLOBConfig.from_dict(azureblob_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/AddUserFromSourceConnectorResponse.md b/docs/AddUserFromSourceConnectorResponse.md
new file mode 100644
index 0000000..839f8df
--- /dev/null
+++ b/docs/AddUserFromSourceConnectorResponse.md
@@ -0,0 +1,29 @@
+# AddUserFromSourceConnectorResponse
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**message** | **str** | |
+
+## Example
+
+```python
+from vectorize_client.models.add_user_from_source_connector_response import AddUserFromSourceConnectorResponse
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of AddUserFromSourceConnectorResponse from a JSON string
+add_user_from_source_connector_response_instance = AddUserFromSourceConnectorResponse.from_json(json)
+# print the JSON string representation of the object
+print(add_user_from_source_connector_response_instance.to_json())
+
+# convert the object into a dict
+add_user_from_source_connector_response_dict = add_user_from_source_connector_response_instance.to_dict()
+# create an instance of AddUserFromSourceConnectorResponse from a dict
+add_user_from_source_connector_response_from_dict = AddUserFromSourceConnectorResponse.from_dict(add_user_from_source_connector_response_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/AddUserToSourceConnectorRequest.md b/docs/AddUserToSourceConnectorRequest.md
new file mode 100644
index 0000000..3768a06
--- /dev/null
+++ b/docs/AddUserToSourceConnectorRequest.md
@@ -0,0 +1,32 @@
+# AddUserToSourceConnectorRequest
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**user_id** | **str** | |
+**selected_files** | [**AddUserToSourceConnectorRequestSelectedFiles**](AddUserToSourceConnectorRequestSelectedFiles.md) | |
+**refresh_token** | **str** | | [optional]
+**access_token** | **str** | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.add_user_to_source_connector_request import AddUserToSourceConnectorRequest
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of AddUserToSourceConnectorRequest from a JSON string
+add_user_to_source_connector_request_instance = AddUserToSourceConnectorRequest.from_json(json)
+# print the JSON string representation of the object
+print(add_user_to_source_connector_request_instance.to_json())
+
+# convert the object into a dict
+add_user_to_source_connector_request_dict = add_user_to_source_connector_request_instance.to_dict()
+# create an instance of AddUserToSourceConnectorRequest from a dict
+add_user_to_source_connector_request_from_dict = AddUserToSourceConnectorRequest.from_dict(add_user_to_source_connector_request_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/AddUserToSourceConnectorRequestSelectedFiles.md b/docs/AddUserToSourceConnectorRequestSelectedFiles.md
new file mode 100644
index 0000000..9526870
--- /dev/null
+++ b/docs/AddUserToSourceConnectorRequestSelectedFiles.md
@@ -0,0 +1,30 @@
+# AddUserToSourceConnectorRequestSelectedFiles
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**page_ids** | **List[str]** | | [optional]
+**database_ids** | **List[str]** | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.add_user_to_source_connector_request_selected_files import AddUserToSourceConnectorRequestSelectedFiles
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of AddUserToSourceConnectorRequestSelectedFiles from a JSON string
+add_user_to_source_connector_request_selected_files_instance = AddUserToSourceConnectorRequestSelectedFiles.from_json(json)
+# print the JSON string representation of the object
+print(add_user_to_source_connector_request_selected_files_instance.to_json())
+
+# convert the object into a dict
+add_user_to_source_connector_request_selected_files_dict = add_user_to_source_connector_request_selected_files_instance.to_dict()
+# create an instance of AddUserToSourceConnectorRequestSelectedFiles from a dict
+add_user_to_source_connector_request_selected_files_from_dict = AddUserToSourceConnectorRequestSelectedFiles.from_dict(add_user_to_source_connector_request_selected_files_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/AddUserToSourceConnectorRequestSelectedFilesAnyOf.md b/docs/AddUserToSourceConnectorRequestSelectedFilesAnyOf.md
new file mode 100644
index 0000000..d5531c9
--- /dev/null
+++ b/docs/AddUserToSourceConnectorRequestSelectedFilesAnyOf.md
@@ -0,0 +1,30 @@
+# AddUserToSourceConnectorRequestSelectedFilesAnyOf
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**page_ids** | **List[str]** | | [optional]
+**database_ids** | **List[str]** | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.add_user_to_source_connector_request_selected_files_any_of import AddUserToSourceConnectorRequestSelectedFilesAnyOf
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of AddUserToSourceConnectorRequestSelectedFilesAnyOf from a JSON string
+add_user_to_source_connector_request_selected_files_any_of_instance = AddUserToSourceConnectorRequestSelectedFilesAnyOf.from_json(json)
+# print the JSON string representation of the object
+print(add_user_to_source_connector_request_selected_files_any_of_instance.to_json())
+
+# convert the object into a dict
+add_user_to_source_connector_request_selected_files_any_of_dict = add_user_to_source_connector_request_selected_files_any_of_instance.to_dict()
+# create an instance of AddUserToSourceConnectorRequestSelectedFilesAnyOf from a dict
+add_user_to_source_connector_request_selected_files_any_of_from_dict = AddUserToSourceConnectorRequestSelectedFilesAnyOf.from_dict(add_user_to_source_connector_request_selected_files_any_of_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/AddUserToSourceConnectorRequestSelectedFilesAnyOfValue.md b/docs/AddUserToSourceConnectorRequestSelectedFilesAnyOfValue.md
new file mode 100644
index 0000000..a0ad6e8
--- /dev/null
+++ b/docs/AddUserToSourceConnectorRequestSelectedFilesAnyOfValue.md
@@ -0,0 +1,30 @@
+# AddUserToSourceConnectorRequestSelectedFilesAnyOfValue
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | |
+**mime_type** | **str** | |
+
+## Example
+
+```python
+from vectorize_client.models.add_user_to_source_connector_request_selected_files_any_of_value import AddUserToSourceConnectorRequestSelectedFilesAnyOfValue
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of AddUserToSourceConnectorRequestSelectedFilesAnyOfValue from a JSON string
+add_user_to_source_connector_request_selected_files_any_of_value_instance = AddUserToSourceConnectorRequestSelectedFilesAnyOfValue.from_json(json)
+# print the JSON string representation of the object
+print(add_user_to_source_connector_request_selected_files_any_of_value_instance.to_json())
+
+# convert the object into a dict
+add_user_to_source_connector_request_selected_files_any_of_value_dict = add_user_to_source_connector_request_selected_files_any_of_value_instance.to_dict()
+# create an instance of AddUserToSourceConnectorRequestSelectedFilesAnyOfValue from a dict
+add_user_to_source_connector_request_selected_files_any_of_value_from_dict = AddUserToSourceConnectorRequestSelectedFilesAnyOfValue.from_dict(add_user_to_source_connector_request_selected_files_any_of_value_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/AdvancedQuery.md b/docs/AdvancedQuery.md
new file mode 100644
index 0000000..4c0c6d4
--- /dev/null
+++ b/docs/AdvancedQuery.md
@@ -0,0 +1,34 @@
+# AdvancedQuery
+
+Advanced query parameters for enhanced search capabilities.
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**mode** | **str** | Search mode: 'text', 'vector', or 'hybrid'. Defaults to 'vector' if not specified. | [optional]
+**text_fields** | **List[str]** | Fields to perform text search on. | [optional]
+**match_type** | **str** | Type of text match to perform. | [optional]
+**text_boost** | **float** | Multiplier for text search scores. | [optional]
+**filters** | **object** | Elasticsearch-compatible filter object. | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.advanced_query import AdvancedQuery
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of AdvancedQuery from a JSON string
+advanced_query_instance = AdvancedQuery.from_json(json)
+# print the JSON string representation of the object
+print(advanced_query_instance.to_json())
+
+# convert the object into a dict
+advanced_query_dict = advanced_query_instance.to_dict()
+# create an instance of AdvancedQuery from a dict
+advanced_query_from_dict = AdvancedQuery.from_dict(advanced_query_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/AmazonS3.md b/docs/AmazonS3.md
new file mode 100644
index 0000000..7f752fd
--- /dev/null
+++ b/docs/AmazonS3.md
@@ -0,0 +1,31 @@
+# AmazonS3
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"AWS_S3\") |
+**config** | [**AWSS3Config**](AWSS3Config.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.amazon_s3 import AmazonS3
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of AmazonS3 from a JSON string
+amazon_s3_instance = AmazonS3.from_json(json)
+# print the JSON string representation of the object
+print(amazon_s3_instance.to_json())
+
+# convert the object into a dict
+amazon_s3_dict = amazon_s3_instance.to_dict()
+# create an instance of AmazonS3 from a dict
+amazon_s3_from_dict = AmazonS3.from_dict(amazon_s3_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/AmazonS31.md b/docs/AmazonS31.md
new file mode 100644
index 0000000..a688a8a
--- /dev/null
+++ b/docs/AmazonS31.md
@@ -0,0 +1,29 @@
+# AmazonS31
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**AWSS3Config**](AWSS3Config.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.amazon_s31 import AmazonS31
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of AmazonS31 from a JSON string
+amazon_s31_instance = AmazonS31.from_json(json)
+# print the JSON string representation of the object
+print(amazon_s31_instance.to_json())
+
+# convert the object into a dict
+amazon_s31_dict = amazon_s31_instance.to_dict()
+# create an instance of AmazonS31 from a dict
+amazon_s31_from_dict = AmazonS31.from_dict(amazon_s31_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/AmazonS32.md b/docs/AmazonS32.md
new file mode 100644
index 0000000..29c792b
--- /dev/null
+++ b/docs/AmazonS32.md
@@ -0,0 +1,30 @@
+# AmazonS32
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"AWS_S3\") |
+
+## Example
+
+```python
+from vectorize_client.models.amazon_s32 import AmazonS32
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of AmazonS32 from a JSON string
+amazon_s32_instance = AmazonS32.from_json(json)
+# print the JSON string representation of the object
+print(AmazonS32.to_json())
+
+# convert the object into a dict
+amazon_s32_dict = amazon_s32_instance.to_dict()
+# create an instance of AmazonS32 from a dict
+amazon_s32_from_dict = AmazonS32.from_dict(amazon_s32_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/AzureBlobStorage.md b/docs/AzureBlobStorage.md
new file mode 100644
index 0000000..4d7c3ee
--- /dev/null
+++ b/docs/AzureBlobStorage.md
@@ -0,0 +1,31 @@
+# AzureBlobStorage
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"AZURE_BLOB\") |
+**config** | [**AZUREBLOBConfig**](AZUREBLOBConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.azure_blob_storage import AzureBlobStorage
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of AzureBlobStorage from a JSON string
+azure_blob_storage_instance = AzureBlobStorage.from_json(json)
+# print the JSON string representation of the object
+print(AzureBlobStorage.to_json())
+
+# convert the object into a dict
+azure_blob_storage_dict = azure_blob_storage_instance.to_dict()
+# create an instance of AzureBlobStorage from a dict
+azure_blob_storage_from_dict = AzureBlobStorage.from_dict(azure_blob_storage_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/AzureBlobStorage1.md b/docs/AzureBlobStorage1.md
new file mode 100644
index 0000000..f6bd884
--- /dev/null
+++ b/docs/AzureBlobStorage1.md
@@ -0,0 +1,29 @@
+# AzureBlobStorage1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**AZUREBLOBConfig**](AZUREBLOBConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.azure_blob_storage1 import AzureBlobStorage1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of AzureBlobStorage1 from a JSON string
+azure_blob_storage1_instance = AzureBlobStorage1.from_json(json)
+# print the JSON string representation of the object
+print(AzureBlobStorage1.to_json())
+
+# convert the object into a dict
+azure_blob_storage1_dict = azure_blob_storage1_instance.to_dict()
+# create an instance of AzureBlobStorage1 from a dict
+azure_blob_storage1_from_dict = AzureBlobStorage1.from_dict(azure_blob_storage1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/AzureBlobStorage2.md b/docs/AzureBlobStorage2.md
new file mode 100644
index 0000000..874c5fc
--- /dev/null
+++ b/docs/AzureBlobStorage2.md
@@ -0,0 +1,30 @@
+# AzureBlobStorage2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"AZURE_BLOB\") |
+
+## Example
+
+```python
+from vectorize_client.models.azure_blob_storage2 import AzureBlobStorage2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of AzureBlobStorage2 from a JSON string
+azure_blob_storage2_instance = AzureBlobStorage2.from_json(json)
+# print the JSON string representation of the object
+print(azure_blob_storage2_instance.to_json())
+
+# convert the object into a dict
+azure_blob_storage2_dict = azure_blob_storage2_instance.to_dict()
+# create an instance of AzureBlobStorage2 from a dict
+azure_blob_storage2_from_dict = AzureBlobStorage2.from_dict(azure_blob_storage2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Azureaisearch.md b/docs/Azureaisearch.md
new file mode 100644
index 0000000..a94540d
--- /dev/null
+++ b/docs/Azureaisearch.md
@@ -0,0 +1,31 @@
+# Azureaisearch
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"AZUREAISEARCH\") |
+**config** | [**AZUREAISEARCHConfig**](AZUREAISEARCHConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.azureaisearch import Azureaisearch
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Azureaisearch from a JSON string
+azureaisearch_instance = Azureaisearch.from_json(json)
+# print the JSON string representation of the object
+print(azureaisearch_instance.to_json())
+
+# convert the object into a dict
+azureaisearch_dict = azureaisearch_instance.to_dict()
+# create an instance of Azureaisearch from a dict
+azureaisearch_from_dict = Azureaisearch.from_dict(azureaisearch_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Azureaisearch1.md b/docs/Azureaisearch1.md
new file mode 100644
index 0000000..52eb086
--- /dev/null
+++ b/docs/Azureaisearch1.md
@@ -0,0 +1,29 @@
+# Azureaisearch1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**AZUREAISEARCHConfig**](AZUREAISEARCHConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.azureaisearch1 import Azureaisearch1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Azureaisearch1 from a JSON string
+azureaisearch1_instance = Azureaisearch1.from_json(json)
+# print the JSON string representation of the object
+print(azureaisearch1_instance.to_json())
+
+# convert the object into a dict
+azureaisearch1_dict = azureaisearch1_instance.to_dict()
+# create an instance of Azureaisearch1 from a dict
+azureaisearch1_from_dict = Azureaisearch1.from_dict(azureaisearch1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Azureaisearch2.md b/docs/Azureaisearch2.md
new file mode 100644
index 0000000..12799ab
--- /dev/null
+++ b/docs/Azureaisearch2.md
@@ -0,0 +1,30 @@
+# Azureaisearch2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"AZUREAISEARCH\") |
+
+## Example
+
+```python
+from vectorize_client.models.azureaisearch2 import Azureaisearch2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Azureaisearch2 from a JSON string
+azureaisearch2_instance = Azureaisearch2.from_json(json)
+# print the JSON string representation of the object
+print(azureaisearch2_instance.to_json())
+
+# convert the object into a dict
+azureaisearch2_dict = azureaisearch2_instance.to_dict()
+# create an instance of Azureaisearch2 from a dict
+azureaisearch2_from_dict = Azureaisearch2.from_dict(azureaisearch2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/BEDROCKAuthConfig.md b/docs/BEDROCKAuthConfig.md
new file mode 100644
index 0000000..bf13fe6
--- /dev/null
+++ b/docs/BEDROCKAuthConfig.md
@@ -0,0 +1,33 @@
+# BEDROCKAuthConfig
+
+Authentication configuration for Amazon Bedrock
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name for your Amazon Bedrock integration |
+**access_key** | **str** | Access Key. Example: Enter your Amazon Bedrock Access Key |
+**key** | **str** | Secret Key. Example: Enter your Amazon Bedrock Secret Key |
+**region** | **str** | Region. Example: Region Name |
+
+## Example
+
+```python
+from vectorize_client.models.bedrock_auth_config import BEDROCKAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of BEDROCKAuthConfig from a JSON string
+bedrock_auth_config_instance = BEDROCKAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(bedrock_auth_config_instance.to_json())
+
+# convert the object into a dict
+bedrock_auth_config_dict = bedrock_auth_config_instance.to_dict()
+# create an instance of BEDROCKAuthConfig from a dict
+bedrock_auth_config_from_dict = BEDROCKAuthConfig.from_dict(bedrock_auth_config_dict)
+```
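+
+The `access_key` property is serialized as `access-key` on the wire, as shown in the connector-creation example in the API docs. A minimal sketch (placeholder credentials) showing that the same `from_dict` helper accepts that wire-format dict:
+
+```python
+from vectorize_client.models.bedrock_auth_config import BEDROCKAuthConfig
+
+# Placeholder credentials; "access-key" is the JSON name of the access_key property.
+bedrock_auth = BEDROCKAuthConfig.from_dict({
+    "name": "My BEDROCKAuthConfig",
+    "access-key": "AKIAIOSFODNN7EXAMPLE",
+    "key": "key_example_123456",
+    "region": "us-east-1",
+})
+print(bedrock_auth.region)
+```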
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Bedrock.md b/docs/Bedrock.md
new file mode 100644
index 0000000..ef2d829
--- /dev/null
+++ b/docs/Bedrock.md
@@ -0,0 +1,31 @@
+# Bedrock
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"BEDROCK\") |
+**config** | [**BEDROCKAuthConfig**](BEDROCKAuthConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.bedrock import Bedrock
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Bedrock from a JSON string
+bedrock_instance = Bedrock.from_json(json)
+# print the JSON string representation of the object
+print(bedrock_instance.to_json())
+
+# convert the object into a dict
+bedrock_dict = bedrock_instance.to_dict()
+# create an instance of Bedrock from a dict
+bedrock_from_dict = Bedrock.from_dict(bedrock_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Bedrock1.md b/docs/Bedrock1.md
new file mode 100644
index 0000000..69878c7
--- /dev/null
+++ b/docs/Bedrock1.md
@@ -0,0 +1,29 @@
+# Bedrock1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**BEDROCKAuthConfig**](BEDROCKAuthConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.bedrock1 import Bedrock1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Bedrock1 from a JSON string
+bedrock1_instance = Bedrock1.from_json(json)
+# print the JSON string representation of the object
+print(bedrock1_instance.to_json())
+
+# convert the object into a dict
+bedrock1_dict = bedrock1_instance.to_dict()
+# create an instance of Bedrock1 from a dict
+bedrock1_from_dict = Bedrock1.from_dict(bedrock1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Bedrock2.md b/docs/Bedrock2.md
new file mode 100644
index 0000000..31ea5bf
--- /dev/null
+++ b/docs/Bedrock2.md
@@ -0,0 +1,30 @@
+# Bedrock2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"BEDROCK\") |
+
+## Example
+
+```python
+from vectorize_client.models.bedrock2 import Bedrock2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Bedrock2 from a JSON string
+bedrock2_instance = Bedrock2.from_json(json)
+# print the JSON string representation of the object
+print(bedrock2_instance.to_json())
+
+# convert the object into a dict
+bedrock2_dict = bedrock2_instance.to_dict()
+# create an instance of Bedrock2 from a dict
+bedrock2_from_dict = Bedrock2.from_dict(bedrock2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/CAPELLAAuthConfig.md b/docs/CAPELLAAuthConfig.md
new file mode 100644
index 0000000..e6ace1f
--- /dev/null
+++ b/docs/CAPELLAAuthConfig.md
@@ -0,0 +1,33 @@
+# CAPELLAAuthConfig
+
+Authentication configuration for Couchbase Capella
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name for your Capella integration |
+**username** | **str** | Cluster Access Name. Example: Enter your cluster access name |
+**password** | **str** | Cluster Access Password. Example: Enter your cluster access password |
+**connection_string** | **str** | Connection String. Example: Enter your connection string |
+
+## Example
+
+```python
+from vectorize_client.models.capella_auth_config import CAPELLAAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of CAPELLAAuthConfig from a JSON string
+capella_auth_config_instance = CAPELLAAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(capella_auth_config_instance.to_json())
+
+# convert the object into a dict
+capella_auth_config_dict = capella_auth_config_instance.to_dict()
+# create an instance of CAPELLAAuthConfig from a dict
+capella_auth_config_from_dict = CAPELLAAuthConfig.from_dict(capella_auth_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/CAPELLAConfig.md b/docs/CAPELLAConfig.md
new file mode 100644
index 0000000..d3d2f04
--- /dev/null
+++ b/docs/CAPELLAConfig.md
@@ -0,0 +1,33 @@
+# CAPELLAConfig
+
+Configuration for Couchbase Capella connector
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**bucket** | **str** | Bucket Name. Example: Enter bucket name |
+**scope** | **str** | Scope Name. Example: Enter scope name |
+**collection** | **str** | Collection Name. Example: Enter collection name |
+**index** | **str** | Search Index Name. Example: Enter search index name |
+
+## Example
+
+```python
+from vectorize_client.models.capella_config import CAPELLAConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of CAPELLAConfig from a JSON string
+capella_config_instance = CAPELLAConfig.from_json(json)
+# print the JSON string representation of the object
+print(capella_config_instance.to_json())
+
+# convert the object into a dict
+capella_config_dict = capella_config_instance.to_dict()
+# create an instance of CAPELLAConfig from a dict
+capella_config_from_dict = CAPELLAConfig.from_dict(capella_config_dict)
+```
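+
+A populated JSON string parses the same way; the placeholder values below are taken from the destination-connector creation example in the API docs:
+
+```python
+from vectorize_client.models.capella_config import CAPELLAConfig
+
+# Placeholder values mirroring the create_destination_connector example.
+capella_config = CAPELLAConfig.from_json(
+    '{"bucket":"example-bucket","scope":"example-scope",'
+    '"collection":"example-collection","index":"example-index"}'
+)
+print(capella_config.bucket)
+```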
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/CONFLUENCEAuthConfig.md b/docs/CONFLUENCEAuthConfig.md
new file mode 100644
index 0000000..85d29f1
--- /dev/null
+++ b/docs/CONFLUENCEAuthConfig.md
@@ -0,0 +1,33 @@
+# CONFLUENCEAuthConfig
+
+Authentication configuration for Confluence
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name |
+**username** | **str** | Username. Example: Enter your Confluence username |
+**api_token** | **str** | API Token. Example: Enter your Confluence API token |
+**domain** | **str** | Domain. Example: Enter your Confluence domain (e.g. my-domain.atlassian.net or confluence.<my-company>.com) |
+
+## Example
+
+```python
+from vectorize_client.models.confluence_auth_config import CONFLUENCEAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of CONFLUENCEAuthConfig from a JSON string
+confluence_auth_config_instance = CONFLUENCEAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(confluence_auth_config_instance.to_json())
+
+# convert the object into a dict
+confluence_auth_config_dict = confluence_auth_config_instance.to_dict()
+# create an instance of CONFLUENCEAuthConfig from a dict
+confluence_auth_config_from_dict = CONFLUENCEAuthConfig.from_dict(confluence_auth_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/CONFLUENCEConfig.md b/docs/CONFLUENCEConfig.md
new file mode 100644
index 0000000..22f7f97
--- /dev/null
+++ b/docs/CONFLUENCEConfig.md
@@ -0,0 +1,31 @@
+# CONFLUENCEConfig
+
+Configuration for Confluence connector
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**spaces** | **str** | Spaces. Example: Spaces to include (name, key or id) |
+**root_parents** | **str** | Root Parents. Example: Enter root parent pages | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.confluence_config import CONFLUENCEConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of CONFLUENCEConfig from a JSON string
+confluence_config_instance = CONFLUENCEConfig.from_json(json)
+# print the JSON string representation of the object
+print(confluence_config_instance.to_json())
+
+# convert the object into a dict
+confluence_config_dict = confluence_config_instance.to_dict()
+# create an instance of CONFLUENCEConfig from a dict
+confluence_config_from_dict = CONFLUENCEConfig.from_dict(confluence_config_dict)
+```
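+
+Since `spaces` is the only required property, a minimal sketch (placeholder value; the JSON key is assumed to match the property name) is:
+
+```python
+from vectorize_client.models.confluence_config import CONFLUENCEConfig
+
+# Hypothetical space list; root_parents is optional and omitted here.
+confluence_config = CONFLUENCEConfig.from_dict({"spaces": "ENG, DOCS"})
+print(confluence_config.to_dict())
+```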
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Capella.md b/docs/Capella.md
new file mode 100644
index 0000000..c038eb7
--- /dev/null
+++ b/docs/Capella.md
@@ -0,0 +1,31 @@
+# Capella
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"CAPELLA\") |
+**config** | [**CAPELLAConfig**](CAPELLAConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.capella import Capella
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Capella from a JSON string
+capella_instance = Capella.from_json(json)
+# print the JSON string representation of the object
+print(capella_instance.to_json())
+
+# convert the object into a dict
+capella_dict = capella_instance.to_dict()
+# create an instance of Capella from a dict
+capella_from_dict = Capella.from_dict(capella_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Capella1.md b/docs/Capella1.md
new file mode 100644
index 0000000..32ca7ae
--- /dev/null
+++ b/docs/Capella1.md
@@ -0,0 +1,29 @@
+# Capella1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**CAPELLAConfig**](CAPELLAConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.capella1 import Capella1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Capella1 from a JSON string
+capella1_instance = Capella1.from_json(json)
+# print the JSON string representation of the object
+print(capella1_instance.to_json())
+
+# convert the object into a dict
+capella1_dict = capella1_instance.to_dict()
+# create an instance of Capella1 from a dict
+capella1_from_dict = Capella1.from_dict(capella1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Capella2.md b/docs/Capella2.md
new file mode 100644
index 0000000..b06c81d
--- /dev/null
+++ b/docs/Capella2.md
@@ -0,0 +1,30 @@
+# Capella2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"CAPELLA\") |
+
+## Example
+
+```python
+from vectorize_client.models.capella2 import Capella2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Capella2 from a JSON string
+capella2_instance = Capella2.from_json(json)
+# print the JSON string representation of the object
+print(capella2_instance.to_json())
+
+# convert the object into a dict
+capella2_dict = capella2_instance.to_dict()
+# create an instance of Capella2 from a dict
+capella2_from_dict = Capella2.from_dict(capella2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Confluence.md b/docs/Confluence.md
new file mode 100644
index 0000000..f202da5
--- /dev/null
+++ b/docs/Confluence.md
@@ -0,0 +1,31 @@
+# Confluence
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"CONFLUENCE\") |
+**config** | [**CONFLUENCEConfig**](CONFLUENCEConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.confluence import Confluence
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Confluence from a JSON string
+confluence_instance = Confluence.from_json(json)
+# print the JSON string representation of the object
+print(confluence_instance.to_json())
+
+# convert the object into a dict
+confluence_dict = confluence_instance.to_dict()
+# create an instance of Confluence from a dict
+confluence_from_dict = Confluence.from_dict(confluence_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Confluence1.md b/docs/Confluence1.md
new file mode 100644
index 0000000..43cce9f
--- /dev/null
+++ b/docs/Confluence1.md
@@ -0,0 +1,29 @@
+# Confluence1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**CONFLUENCEConfig**](CONFLUENCEConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.confluence1 import Confluence1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Confluence1 from a JSON string
+confluence1_instance = Confluence1.from_json(json)
+# print the JSON string representation of the object
+print(confluence1_instance.to_json())
+
+# convert the object into a dict
+confluence1_dict = confluence1_instance.to_dict()
+# create an instance of Confluence1 from a dict
+confluence1_from_dict = Confluence1.from_dict(confluence1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Confluence2.md b/docs/Confluence2.md
new file mode 100644
index 0000000..66afc8f
--- /dev/null
+++ b/docs/Confluence2.md
@@ -0,0 +1,30 @@
+# Confluence2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"CONFLUENCE\") |
+
+## Example
+
+```python
+from vectorize_client.models.confluence2 import Confluence2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Confluence2 from a JSON string
+confluence2_instance = Confluence2.from_json(json)
+# print the JSON string representation of the object
+print(confluence2_instance.to_json())
+
+# convert the object into a dict
+confluence2_dict = confluence2_instance.to_dict()
+# create an instance of Confluence2 from a dict
+confluence2_from_dict = Confluence2.from_dict(confluence2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/ConnectorsAIPlatformsApi.md b/docs/ConnectorsAIPlatformsApi.md
new file mode 100644
index 0000000..437ef7a
--- /dev/null
+++ b/docs/ConnectorsAIPlatformsApi.md
@@ -0,0 +1,432 @@
+# vectorize_client.ConnectorsAIPlatformsApi
+
+All URIs are relative to *https://api.vectorize.io/v1*
+
+Method | HTTP request | Description
+------------- | ------------- | -------------
+[**create_ai_platform_connector**](ConnectorsAIPlatformsApi.md#create_ai_platform_connector) | **POST** /org/{organizationId}/connectors/aiplatforms | Create a new AI platform connector
+[**delete_ai_platform**](ConnectorsAIPlatformsApi.md#delete_ai_platform) | **DELETE** /org/{organizationId}/connectors/aiplatforms/{aiplatformId} | Delete an AI platform connector
+[**get_ai_platform_connector**](ConnectorsAIPlatformsApi.md#get_ai_platform_connector) | **GET** /org/{organizationId}/connectors/aiplatforms/{aiplatformId} | Get an AI platform connector
+[**get_ai_platform_connectors**](ConnectorsAIPlatformsApi.md#get_ai_platform_connectors) | **GET** /org/{organizationId}/connectors/aiplatforms | Get all existing AI Platform connectors
+[**update_ai_platform_connector**](ConnectorsAIPlatformsApi.md#update_ai_platform_connector) | **PATCH** /org/{organizationId}/connectors/aiplatforms/{aiplatformId} | Update an AI Platform connector
+
+
+# **create_ai_platform_connector**
+> CreateAIPlatformConnectorResponse create_ai_platform_connector(organization_id, create_ai_platform_connector_request_inner)
+
+Create a new AI platform connector
+
+Creates a new AI platform connector for embeddings and processing. The specific configuration fields required depend on the platform type selected.
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+import vectorize_client
+from vectorize_client.models.create_ai_platform_connector_request_inner import CreateAIPlatformConnectorRequestInner
+from vectorize_client.models.create_ai_platform_connector_response import CreateAIPlatformConnectorResponse
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.ConnectorsAIPlatformsApi(api_client)
+ organization_id = 'organization_id_example' # str |
+ create_ai_platform_connector_request_inner = [{"name":"My CreateAIPlatformConnectorRequest","type":"BEDROCK","config":{"name":"My BEDROCKAuthConfig","access-key":"AKIAIOSFODNN7EXAMPLE","key":"key_example_123456","region":"us-east-1"}}] # List[CreateAIPlatformConnectorRequestInner] |
+
+ try:
+ # Create a new AI platform connector
+ api_response = api_instance.create_ai_platform_connector(organization_id, create_ai_platform_connector_request_inner)
+ print("The response of ConnectorsAIPlatformsApi->create_ai_platform_connector:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling ConnectorsAIPlatformsApi->create_ai_platform_connector: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization_id** | **str**| |
+ **create_ai_platform_connector_request_inner** | [**List[CreateAIPlatformConnectorRequestInner]**](CreateAIPlatformConnectorRequestInner.md)| |
+
+### Return type
+
+[**CreateAIPlatformConnectorResponse**](CreateAIPlatformConnectorResponse.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: application/json
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | Connector successfully created | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
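+The `ApiException` imported above carries the HTTP status of a failed call, so the `try`/`except` in the example can branch on the codes listed in this table. A minimal sketch, reusing the names defined in the example and assuming the `status`/`body` attributes of `vectorize_client.rest.ApiException`:
+
+```python
+try:
+    api_response = api_instance.create_ai_platform_connector(organization_id, create_ai_platform_connector_request_inner)
+    pprint(api_response)
+except ApiException as e:
+    if e.status == 401:
+        print("Unauthorized: check the BEARER_TOKEN environment variable")
+    elif e.status == 400:
+        print("Invalid request: %s" % e.body)
+    else:
+        raise
+```
+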
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **delete_ai_platform**
+> DeleteAIPlatformConnectorResponse delete_ai_platform(organization, aiplatform_id)
+
+Delete an AI platform connector
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+import vectorize_client
+from vectorize_client.models.delete_ai_platform_connector_response import DeleteAIPlatformConnectorResponse
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.ConnectorsAIPlatformsApi(api_client)
+ organization = 'organization_example' # str |
+ aiplatform_id = 'aiplatform_id_example' # str |
+
+ try:
+ # Delete an AI platform connector
+ api_response = api_instance.delete_ai_platform(organization, aiplatform_id)
+ print("The response of ConnectorsAIPlatformsApi->delete_ai_platform:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling ConnectorsAIPlatformsApi->delete_ai_platform: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization** | **str**| |
+ **aiplatform_id** | **str**| |
+
+### Return type
+
+[**DeleteAIPlatformConnectorResponse**](DeleteAIPlatformConnectorResponse.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: Not defined
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | AI Platform connector successfully deleted | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **get_ai_platform_connector**
+> AIPlatform get_ai_platform_connector(organization, aiplatform_id)
+
+Get an AI platform connector
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+import vectorize_client
+from vectorize_client.models.ai_platform import AIPlatform
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.ConnectorsAIPlatformsApi(api_client)
+ organization = 'organization_example' # str |
+ aiplatform_id = 'aiplatform_id_example' # str |
+
+ try:
+ # Get an AI platform connector
+ api_response = api_instance.get_ai_platform_connector(organization, aiplatform_id)
+ print("The response of ConnectorsAIPlatformsApi->get_ai_platform_connector:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling ConnectorsAIPlatformsApi->get_ai_platform_connector: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization** | **str**| |
+ **aiplatform_id** | **str**| |
+
+### Return type
+
+[**AIPlatform**](AIPlatform.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: Not defined
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | Get an AI platform connector | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **get_ai_platform_connectors**
+> GetAIPlatformConnectors200Response get_ai_platform_connectors(organization_id)
+
+Get all existing AI Platform connectors
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+import vectorize_client
+from vectorize_client.models.get_ai_platform_connectors200_response import GetAIPlatformConnectors200Response
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.ConnectorsAIPlatformsApi(api_client)
+ organization_id = 'organization_id_example' # str |
+
+ try:
+ # Get all existing AI Platform connectors
+ api_response = api_instance.get_ai_platform_connectors(organization_id)
+ print("The response of ConnectorsAIPlatformsApi->get_ai_platform_connectors:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling ConnectorsAIPlatformsApi->get_ai_platform_connectors: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization_id** | **str**| |
+
+### Return type
+
+[**GetAIPlatformConnectors200Response**](GetAIPlatformConnectors200Response.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: Not defined
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | Get all existing AI Platform connectors | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **update_ai_platform_connector**
+> UpdateAIPlatformConnectorResponse update_ai_platform_connector(organization, aiplatform_id, update_aiplatform_connector_request)
+
+Update an AI Platform connector
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+import vectorize_client
+from vectorize_client.models.update_ai_platform_connector_response import UpdateAIPlatformConnectorResponse
+from vectorize_client.models.update_aiplatform_connector_request import UpdateAiplatformConnectorRequest
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.ConnectorsAIPlatformsApi(api_client)
+ organization = 'organization_example' # str |
+ aiplatform_id = 'aiplatform_id_example' # str |
+ update_aiplatform_connector_request = vectorize_client.UpdateAiplatformConnectorRequest() # UpdateAiplatformConnectorRequest |
+
+ try:
+ # Update an AI Platform connector
+ api_response = api_instance.update_ai_platform_connector(organization, aiplatform_id, update_aiplatform_connector_request)
+ print("The response of ConnectorsAIPlatformsApi->update_ai_platform_connector:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling ConnectorsAIPlatformsApi->update_ai_platform_connector: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization** | **str**| |
+ **aiplatform_id** | **str**| |
+ **update_aiplatform_connector_request** | [**UpdateAiplatformConnectorRequest**](UpdateAiplatformConnectorRequest.md)| |
+
+### Return type
+
+[**UpdateAIPlatformConnectorResponse**](UpdateAIPlatformConnectorResponse.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: application/json
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | AI Platform connector successfully updated | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
diff --git a/docs/ConnectorsDestinationConnectorsApi.md b/docs/ConnectorsDestinationConnectorsApi.md
new file mode 100644
index 0000000..da227fa
--- /dev/null
+++ b/docs/ConnectorsDestinationConnectorsApi.md
@@ -0,0 +1,432 @@
+# vectorize_client.ConnectorsDestinationConnectorsApi
+
+All URIs are relative to *https://api.vectorize.io/v1*
+
+Method | HTTP request | Description
+------------- | ------------- | -------------
+[**create_destination_connector**](ConnectorsDestinationConnectorsApi.md#create_destination_connector) | **POST** /org/{organizationId}/connectors/destinations | Create a new destination connector
+[**delete_destination_connector**](ConnectorsDestinationConnectorsApi.md#delete_destination_connector) | **DELETE** /org/{organizationId}/connectors/destinations/{destinationConnectorId} | Delete a destination connector
+[**get_destination_connector**](ConnectorsDestinationConnectorsApi.md#get_destination_connector) | **GET** /org/{organizationId}/connectors/destinations/{destinationConnectorId} | Get a destination connector
+[**get_destination_connectors**](ConnectorsDestinationConnectorsApi.md#get_destination_connectors) | **GET** /org/{organizationId}/connectors/destinations | Get all existing destination connectors
+[**update_destination_connector**](ConnectorsDestinationConnectorsApi.md#update_destination_connector) | **PATCH** /org/{organizationId}/connectors/destinations/{destinationConnectorId} | Update a destination connector
+
+
+# **create_destination_connector**
+> CreateDestinationConnectorResponse create_destination_connector(organization_id, create_destination_connector_request_inner)
+
+Create a new destination connector
+
+Creates a new destination connector for data storage. The specific configuration fields required depend on the connector type selected.
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+import vectorize_client
+from vectorize_client.models.create_destination_connector_request_inner import CreateDestinationConnectorRequestInner
+from vectorize_client.models.create_destination_connector_response import CreateDestinationConnectorResponse
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.ConnectorsDestinationConnectorsApi(api_client)
+ organization_id = 'organization_id_example' # str |
+ create_destination_connector_request_inner = [{"name":"My CreateDestinationConnectorRequest","type":"CAPELLA","config":{"bucket":"example-bucket","scope":"example-scope","collection":"example-collection","index":"example-index"}}] # List[CreateDestinationConnectorRequestInner] |
+
+ try:
+ # Create a new destination connector
+ api_response = api_instance.create_destination_connector(organization_id, create_destination_connector_request_inner)
+ print("The response of ConnectorsDestinationConnectorsApi->create_destination_connector:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling ConnectorsDestinationConnectorsApi->create_destination_connector: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization_id** | **str**| |
+ **create_destination_connector_request_inner** | [**List[CreateDestinationConnectorRequestInner]**](CreateDestinationConnectorRequestInner.md)| |
+
+### Return type
+
+[**CreateDestinationConnectorResponse**](CreateDestinationConnectorResponse.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: application/json
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | Connector successfully created | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **delete_destination_connector**
+> DeleteDestinationConnectorResponse delete_destination_connector(organization, destination_connector_id)
+
+Delete a destination connector
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+import vectorize_client
+from vectorize_client.models.delete_destination_connector_response import DeleteDestinationConnectorResponse
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.ConnectorsDestinationConnectorsApi(api_client)
+ organization = 'organization_example' # str |
+ destination_connector_id = 'destination_connector_id_example' # str |
+
+ try:
+ # Delete a destination connector
+ api_response = api_instance.delete_destination_connector(organization, destination_connector_id)
+ print("The response of ConnectorsDestinationConnectorsApi->delete_destination_connector:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling ConnectorsDestinationConnectorsApi->delete_destination_connector: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization** | **str**| |
+ **destination_connector_id** | **str**| |
+
+### Return type
+
+[**DeleteDestinationConnectorResponse**](DeleteDestinationConnectorResponse.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: Not defined
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | Destination connector successfully deleted | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **get_destination_connector**
+> DestinationConnector get_destination_connector(organization, destination_connector_id)
+
+Get a destination connector
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+import vectorize_client
+from vectorize_client.models.destination_connector import DestinationConnector
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.ConnectorsDestinationConnectorsApi(api_client)
+ organization = 'organization_example' # str |
+ destination_connector_id = 'destination_connector_id_example' # str |
+
+ try:
+ # Get a destination connector
+ api_response = api_instance.get_destination_connector(organization, destination_connector_id)
+ print("The response of ConnectorsDestinationConnectorsApi->get_destination_connector:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling ConnectorsDestinationConnectorsApi->get_destination_connector: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization** | **str**| |
+ **destination_connector_id** | **str**| |
+
+### Return type
+
+[**DestinationConnector**](DestinationConnector.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: Not defined
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | Get a destination connector | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **get_destination_connectors**
+> GetDestinationConnectors200Response get_destination_connectors(organization_id)
+
+Get all existing destination connectors
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+import vectorize_client
+from vectorize_client.models.get_destination_connectors200_response import GetDestinationConnectors200Response
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.ConnectorsDestinationConnectorsApi(api_client)
+ organization_id = 'organization_id_example' # str |
+
+ try:
+ # Get all existing destination connectors
+ api_response = api_instance.get_destination_connectors(organization_id)
+ print("The response of ConnectorsDestinationConnectorsApi->get_destination_connectors:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling ConnectorsDestinationConnectorsApi->get_destination_connectors: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization_id** | **str**| |
+
+### Return type
+
+[**GetDestinationConnectors200Response**](GetDestinationConnectors200Response.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: Not defined
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | Get all destination connectors | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **update_destination_connector**
+> UpdateDestinationConnectorResponse update_destination_connector(organization, destination_connector_id, update_destination_connector_request)
+
+Update a destination connector
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+import vectorize_client
+from vectorize_client.models.update_destination_connector_request import UpdateDestinationConnectorRequest
+from vectorize_client.models.update_destination_connector_response import UpdateDestinationConnectorResponse
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.ConnectorsDestinationConnectorsApi(api_client)
+ organization = 'organization_example' # str |
+ destination_connector_id = 'destination_connector_id_example' # str |
+ update_destination_connector_request = vectorize_client.UpdateDestinationConnectorRequest() # UpdateDestinationConnectorRequest |
+
+ try:
+ # Update a destination connector
+ api_response = api_instance.update_destination_connector(organization, destination_connector_id, update_destination_connector_request)
+ print("The response of ConnectorsDestinationConnectorsApi->update_destination_connector:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling ConnectorsDestinationConnectorsApi->update_destination_connector: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization** | **str**| |
+ **destination_connector_id** | **str**| |
+ **update_destination_connector_request** | [**UpdateDestinationConnectorRequest**](UpdateDestinationConnectorRequest.md)| |
+
+### Return type
+
+[**UpdateDestinationConnectorResponse**](UpdateDestinationConnectorResponse.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: application/json
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | Destination connector successfully updated | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
diff --git a/docs/ConnectorsSourceConnectorsApi.md b/docs/ConnectorsSourceConnectorsApi.md
new file mode 100644
index 0000000..2358517
--- /dev/null
+++ b/docs/ConnectorsSourceConnectorsApi.md
@@ -0,0 +1,693 @@
+# vectorize_client.ConnectorsSourceConnectorsApi
+
+All URIs are relative to *https://api.vectorize.io/v1*
+
+Method | HTTP request | Description
+------------- | ------------- | -------------
+[**add_user_to_source_connector**](ConnectorsSourceConnectorsApi.md#add_user_to_source_connector) | **POST** /org/{organizationId}/connectors/sources/{sourceConnectorId}/users | Add a user to a source connector
+[**create_source_connector**](ConnectorsSourceConnectorsApi.md#create_source_connector) | **POST** /org/{organizationId}/connectors/sources | Create a new source connector
+[**delete_source_connector**](ConnectorsSourceConnectorsApi.md#delete_source_connector) | **DELETE** /org/{organizationId}/connectors/sources/{sourceConnectorId} | Delete a source connector
+[**delete_user_from_source_connector**](ConnectorsSourceConnectorsApi.md#delete_user_from_source_connector) | **DELETE** /org/{organizationId}/connectors/sources/{sourceConnectorId}/users | Delete a source connector user
+[**get_source_connector**](ConnectorsSourceConnectorsApi.md#get_source_connector) | **GET** /org/{organizationId}/connectors/sources/{sourceConnectorId} | Get a source connector
+[**get_source_connectors**](ConnectorsSourceConnectorsApi.md#get_source_connectors) | **GET** /org/{organizationId}/connectors/sources | Get all existing source connectors
+[**update_source_connector**](ConnectorsSourceConnectorsApi.md#update_source_connector) | **PATCH** /org/{organizationId}/connectors/sources/{sourceConnectorId} | Update a source connector
+[**update_user_in_source_connector**](ConnectorsSourceConnectorsApi.md#update_user_in_source_connector) | **PATCH** /org/{organizationId}/connectors/sources/{sourceConnectorId}/users | Update a source connector user
+
+
+# **add_user_to_source_connector**
+> AddUserFromSourceConnectorResponse add_user_to_source_connector(organization, source_connector_id, add_user_to_source_connector_request)
+
+Add a user to a source connector
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+import vectorize_client
+from vectorize_client.models.add_user_from_source_connector_response import AddUserFromSourceConnectorResponse
+from vectorize_client.models.add_user_to_source_connector_request import AddUserToSourceConnectorRequest
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.ConnectorsSourceConnectorsApi(api_client)
+ organization = 'organization_example' # str |
+ source_connector_id = 'source_connector_id_example' # str |
+ add_user_to_source_connector_request = {"userId":"29cc613c-dcb8-429e-88fe-be19dbd8b312","selectedFiles":{},"refreshToken":"refresh_token_example_123456","accessToken":"access_token_example_123456"} # AddUserToSourceConnectorRequest |
+
+ try:
+ # Add a user to a source connector
+ api_response = api_instance.add_user_to_source_connector(organization, source_connector_id, add_user_to_source_connector_request)
+ print("The response of ConnectorsSourceConnectorsApi->add_user_to_source_connector:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling ConnectorsSourceConnectorsApi->add_user_to_source_connector: %s\n" % e)
+```
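+
+The request body can also be built from the imported `AddUserToSourceConnectorRequest` model instead of a raw dict. A minimal sketch, assuming the generator's usual snake_case field names (`user_id`, `selected_files`, `refresh_token`, `access_token`):
+
+```python
+# equivalent to the dict literal above; field names are assumed snake_case
+add_user_to_source_connector_request = AddUserToSourceConnectorRequest(
+    user_id="29cc613c-dcb8-429e-88fe-be19dbd8b312",
+    selected_files={},
+    refresh_token="refresh_token_example_123456",
+    access_token="access_token_example_123456",
+)
+```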
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization** | **str**| |
+ **source_connector_id** | **str**| |
+ **add_user_to_source_connector_request** | [**AddUserToSourceConnectorRequest**](AddUserToSourceConnectorRequest.md)| |
+
+### Return type
+
+[**AddUserFromSourceConnectorResponse**](AddUserFromSourceConnectorResponse.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: application/json
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | User successfully added to the source connector | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **create_source_connector**
+> CreateSourceConnectorResponse create_source_connector(organization_id, create_source_connector_request_inner)
+
+Create a new source connector
+
+Creates a new source connector for data ingestion. The specific configuration fields required depend on the connector type selected.
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+import vectorize_client
+from vectorize_client.models.create_source_connector_request_inner import CreateSourceConnectorRequestInner
+from vectorize_client.models.create_source_connector_response import CreateSourceConnectorResponse
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.ConnectorsSourceConnectorsApi(api_client)
+ organization_id = 'organization_id_example' # str |
+ create_source_connector_request_inner = [{"name":"My CreateSourceConnectorRequest","type":"AWS_S3","config":{"file-extensions":"pdf","idle-time":300,"recursive":true,"path-prefix":"/example/path","path-metadata-regex":"/example/path","path-regex-group-names":"/example/path"}}] # List[CreateSourceConnectorRequestInner] |
+
+ try:
+ # Create a new source connector
+ api_response = api_instance.create_source_connector(organization_id, create_source_connector_request_inner)
+ print("The response of ConnectorsSourceConnectorsApi->create_source_connector:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling ConnectorsSourceConnectorsApi->create_source_connector: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization_id** | **str**| |
+ **create_source_connector_request_inner** | [**List[CreateSourceConnectorRequestInner]**](CreateSourceConnectorRequestInner.md)| |
+
+### Return type
+
+[**CreateSourceConnectorResponse**](CreateSourceConnectorResponse.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: application/json
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | Connector successfully created | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
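+The ids assigned to the newly created connectors are returned in the `connectors` field of the response (see [CreatedSourceConnector](CreatedSourceConnector.md)). A minimal sketch, reusing `api_response` from the example above:
+
+```python
+# read back the generated connector id from the create response
+created = api_response.connectors[0]  # CreatedSourceConnector
+print(f"Created source connector '{created.name}' with id {created.id}")
+```
+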
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **delete_source_connector**
+> DeleteSourceConnectorResponse delete_source_connector(organization, source_connector_id)
+
+Delete a source connector
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+import vectorize_client
+from vectorize_client.models.delete_source_connector_response import DeleteSourceConnectorResponse
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.ConnectorsSourceConnectorsApi(api_client)
+ organization = 'organization_example' # str |
+ source_connector_id = 'source_connector_id_example' # str |
+
+ try:
+ # Delete a source connector
+ api_response = api_instance.delete_source_connector(organization, source_connector_id)
+ print("The response of ConnectorsSourceConnectorsApi->delete_source_connector:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling ConnectorsSourceConnectorsApi->delete_source_connector: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization** | **str**| |
+ **source_connector_id** | **str**| |
+
+### Return type
+
+[**DeleteSourceConnectorResponse**](DeleteSourceConnectorResponse.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: Not defined
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | Source connector successfully deleted | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **delete_user_from_source_connector**
+> RemoveUserFromSourceConnectorResponse delete_user_from_source_connector(organization, source_connector_id, remove_user_from_source_connector_request)
+
+Delete a source connector user
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+import vectorize_client
+from vectorize_client.models.remove_user_from_source_connector_request import RemoveUserFromSourceConnectorRequest
+from vectorize_client.models.remove_user_from_source_connector_response import RemoveUserFromSourceConnectorResponse
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.ConnectorsSourceConnectorsApi(api_client)
+ organization = 'organization_example' # str |
+ source_connector_id = 'source_connector_id_example' # str |
+ remove_user_from_source_connector_request = {"userId":"a3703b11-2eba-45e3-87cd-7e5e7c076e3a"} # RemoveUserFromSourceConnectorRequest |
+
+ try:
+ # Delete a source connector user
+ api_response = api_instance.delete_user_from_source_connector(organization, source_connector_id, remove_user_from_source_connector_request)
+ print("The response of ConnectorsSourceConnectorsApi->delete_user_from_source_connector:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling ConnectorsSourceConnectorsApi->delete_user_from_source_connector: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization** | **str**| |
+ **source_connector_id** | **str**| |
+ **remove_user_from_source_connector_request** | [**RemoveUserFromSourceConnectorRequest**](RemoveUserFromSourceConnectorRequest.md)| |
+
+### Return type
+
+[**RemoveUserFromSourceConnectorResponse**](RemoveUserFromSourceConnectorResponse.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: application/json
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | User successfully removed from the source connector | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **get_source_connector**
+> SourceConnector get_source_connector(organization, source_connector_id)
+
+Get a source connector
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+import vectorize_client
+from vectorize_client.models.source_connector import SourceConnector
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.ConnectorsSourceConnectorsApi(api_client)
+ organization = 'organization_example' # str |
+ source_connector_id = 'source_connector_id_example' # str |
+
+ try:
+ # Get a source connector
+ api_response = api_instance.get_source_connector(organization, source_connector_id)
+ print("The response of ConnectorsSourceConnectorsApi->get_source_connector:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling ConnectorsSourceConnectorsApi->get_source_connector: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization** | **str**| |
+ **source_connector_id** | **str**| |
+
+### Return type
+
+[**SourceConnector**](SourceConnector.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: Not defined
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | Get a source connector | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **get_source_connectors**
+> GetSourceConnectors200Response get_source_connectors(organization_id)
+
+Get all existing source connectors
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+import vectorize_client
+from vectorize_client.models.get_source_connectors200_response import GetSourceConnectors200Response
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.ConnectorsSourceConnectorsApi(api_client)
+ organization_id = 'organization_id_example' # str |
+
+ try:
+ # Get all existing source connectors
+ api_response = api_instance.get_source_connectors(organization_id)
+ print("The response of ConnectorsSourceConnectorsApi->get_source_connectors:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling ConnectorsSourceConnectorsApi->get_source_connectors: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization_id** | **str**| |
+
+### Return type
+
+[**GetSourceConnectors200Response**](GetSourceConnectors200Response.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: Not defined
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | Get all source connectors | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **update_source_connector**
+> UpdateSourceConnectorResponse update_source_connector(organization, source_connector_id, update_source_connector_request)
+
+Update a source connector
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+import vectorize_client
+from vectorize_client.models.update_source_connector_request import UpdateSourceConnectorRequest
+from vectorize_client.models.update_source_connector_response import UpdateSourceConnectorResponse
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.ConnectorsSourceConnectorsApi(api_client)
+ organization = 'organization_example' # str |
+ source_connector_id = 'source_connector_id_example' # str |
+ update_source_connector_request = vectorize_client.UpdateSourceConnectorRequest() # UpdateSourceConnectorRequest |
+
+ try:
+ # Update a source connector
+ api_response = api_instance.update_source_connector(organization, source_connector_id, update_source_connector_request)
+ print("The response of ConnectorsSourceConnectorsApi->update_source_connector:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling ConnectorsSourceConnectorsApi->update_source_connector: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization** | **str**| |
+ **source_connector_id** | **str**| |
+ **update_source_connector_request** | [**UpdateSourceConnectorRequest**](UpdateSourceConnectorRequest.md)| |
+
+### Return type
+
+[**UpdateSourceConnectorResponse**](UpdateSourceConnectorResponse.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: application/json
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | Source connector successfully updated | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **update_user_in_source_connector**
+> UpdateUserInSourceConnectorResponse update_user_in_source_connector(organization, source_connector_id, update_user_in_source_connector_request)
+
+Update a source connector user
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+import vectorize_client
+from vectorize_client.models.update_user_in_source_connector_request import UpdateUserInSourceConnectorRequest
+from vectorize_client.models.update_user_in_source_connector_response import UpdateUserInSourceConnectorResponse
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.ConnectorsSourceConnectorsApi(api_client)
+ organization = 'organization_example' # str |
+ source_connector_id = 'source_connector_id_example' # str |
+ update_user_in_source_connector_request = {"userId":"1dda2405-5b9d-403a-bdf7-01a78cb796da","selectedFiles":{},"refreshToken":"refresh_token_example_123456","accessToken":"access_token_example_123456"} # UpdateUserInSourceConnectorRequest |
+
+ try:
+ # Update a source connector user
+ api_response = api_instance.update_user_in_source_connector(organization, source_connector_id, update_user_in_source_connector_request)
+ print("The response of ConnectorsSourceConnectorsApi->update_user_in_source_connector:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling ConnectorsSourceConnectorsApi->update_user_in_source_connector: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization** | **str**| |
+ **source_connector_id** | **str**| |
+ **update_user_in_source_connector_request** | [**UpdateUserInSourceConnectorRequest**](UpdateUserInSourceConnectorRequest.md)| |
+
+### Return type
+
+[**UpdateUserInSourceConnectorResponse**](UpdateUserInSourceConnectorResponse.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: application/json
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | User successfully updated in the source connector | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
diff --git a/docs/CreateAIPlatformConnector.md b/docs/CreateAIPlatformConnector.md
new file mode 100644
index 0000000..aa2b666
--- /dev/null
+++ b/docs/CreateAIPlatformConnector.md
@@ -0,0 +1,31 @@
+# CreateAIPlatformConnector
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | |
+**type** | [**AIPlatformType**](AIPlatformType.md) | |
+**config** | **Dict[str, Optional[object]]** | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.create_ai_platform_connector import CreateAIPlatformConnector
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of CreateAIPlatformConnector from a JSON string
+create_ai_platform_connector_instance = CreateAIPlatformConnector.from_json(json)
+# print the JSON string representation of the object
+print(create_ai_platform_connector_instance.to_json())
+
+# convert the object into a dict
+create_ai_platform_connector_dict = create_ai_platform_connector_instance.to_dict()
+# create an instance of CreateAIPlatformConnector from a dict
+create_ai_platform_connector_from_dict = CreateAIPlatformConnector.from_dict(create_ai_platform_connector_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/CreateAIPlatformConnectorRequestInner.md b/docs/CreateAIPlatformConnectorRequestInner.md
new file mode 100644
index 0000000..d10a800
--- /dev/null
+++ b/docs/CreateAIPlatformConnectorRequestInner.md
@@ -0,0 +1,31 @@
+# CreateAIPlatformConnectorRequestInner
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"BEDROCK\") |
+**config** | [**VOYAGEAuthConfig**](VOYAGEAuthConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.create_ai_platform_connector_request_inner import CreateAIPlatformConnectorRequestInner
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of CreateAIPlatformConnectorRequestInner from a JSON string
+create_ai_platform_connector_request_inner_instance = CreateAIPlatformConnectorRequestInner.from_json(json)
+# print the JSON string representation of the object
+print(create_ai_platform_connector_request_inner_instance.to_json())
+
+# convert the object into a dict
+create_ai_platform_connector_request_inner_dict = create_ai_platform_connector_request_inner_instance.to_dict()
+# create an instance of CreateAIPlatformConnectorRequestInner from a dict
+create_ai_platform_connector_request_inner_from_dict = CreateAIPlatformConnectorRequestInner.from_dict(create_ai_platform_connector_request_inner_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/CreateAIPlatformConnectorResponse.md b/docs/CreateAIPlatformConnectorResponse.md
new file mode 100644
index 0000000..7727f5e
--- /dev/null
+++ b/docs/CreateAIPlatformConnectorResponse.md
@@ -0,0 +1,30 @@
+# CreateAIPlatformConnectorResponse
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**message** | **str** | |
+**connectors** | [**List[CreatedAIPlatformConnector]**](CreatedAIPlatformConnector.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.create_ai_platform_connector_response import CreateAIPlatformConnectorResponse
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of CreateAIPlatformConnectorResponse from a JSON string
+create_ai_platform_connector_response_instance = CreateAIPlatformConnectorResponse.from_json(json)
+# print the JSON string representation of the object
+print(create_ai_platform_connector_response_instance.to_json())
+
+# convert the object into a dict
+create_ai_platform_connector_response_dict = create_ai_platform_connector_response_instance.to_dict()
+# create an instance of CreateAIPlatformConnectorResponse from a dict
+create_ai_platform_connector_response_from_dict = CreateAIPlatformConnectorResponse.from_dict(create_ai_platform_connector_response_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/CreateDestinationConnector.md b/docs/CreateDestinationConnector.md
new file mode 100644
index 0000000..0cf3211
--- /dev/null
+++ b/docs/CreateDestinationConnector.md
@@ -0,0 +1,31 @@
+# CreateDestinationConnector
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | |
+**type** | [**DestinationConnectorType**](DestinationConnectorType.md) | |
+**config** | **Dict[str, Optional[object]]** | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.create_destination_connector import CreateDestinationConnector
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of CreateDestinationConnector from a JSON string
+create_destination_connector_instance = CreateDestinationConnector.from_json(json)
+# print the JSON string representation of the object
+print(create_destination_connector_instance.to_json())
+
+# convert the object into a dict
+create_destination_connector_dict = create_destination_connector_instance.to_dict()
+# create an instance of CreateDestinationConnector from a dict
+create_destination_connector_from_dict = CreateDestinationConnector.from_dict(create_destination_connector_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/CreateDestinationConnectorRequestInner.md b/docs/CreateDestinationConnectorRequestInner.md
new file mode 100644
index 0000000..86590cb
--- /dev/null
+++ b/docs/CreateDestinationConnectorRequestInner.md
@@ -0,0 +1,31 @@
+# CreateDestinationConnectorRequestInner
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"CAPELLA\") |
+**config** | [**TURBOPUFFERConfig**](TURBOPUFFERConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.create_destination_connector_request_inner import CreateDestinationConnectorRequestInner
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of CreateDestinationConnectorRequestInner from a JSON string
+create_destination_connector_request_inner_instance = CreateDestinationConnectorRequestInner.from_json(json)
+# print the JSON string representation of the object
+print(create_destination_connector_request_inner_instance.to_json())
+
+# convert the object into a dict
+create_destination_connector_request_inner_dict = create_destination_connector_request_inner_instance.to_dict()
+# create an instance of CreateDestinationConnectorRequestInner from a dict
+create_destination_connector_request_inner_from_dict = CreateDestinationConnectorRequestInner.from_dict(create_destination_connector_request_inner_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/CreateDestinationConnectorResponse.md b/docs/CreateDestinationConnectorResponse.md
new file mode 100644
index 0000000..67140c9
--- /dev/null
+++ b/docs/CreateDestinationConnectorResponse.md
@@ -0,0 +1,30 @@
+# CreateDestinationConnectorResponse
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**message** | **str** | |
+**connectors** | [**List[CreatedDestinationConnector]**](CreatedDestinationConnector.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.create_destination_connector_response import CreateDestinationConnectorResponse
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of CreateDestinationConnectorResponse from a JSON string
+create_destination_connector_response_instance = CreateDestinationConnectorResponse.from_json(json)
+# print the JSON string representation of the object
+print(create_destination_connector_response_instance.to_json())
+
+# convert the object into a dict
+create_destination_connector_response_dict = create_destination_connector_response_instance.to_dict()
+# create an instance of CreateDestinationConnectorResponse from a dict
+create_destination_connector_response_from_dict = CreateDestinationConnectorResponse.from_dict(create_destination_connector_response_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/CreatePipelineResponse.md b/docs/CreatePipelineResponse.md
new file mode 100644
index 0000000..752d37f
--- /dev/null
+++ b/docs/CreatePipelineResponse.md
@@ -0,0 +1,30 @@
+# CreatePipelineResponse
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**message** | **str** | |
+**data** | [**CreatePipelineResponseData**](CreatePipelineResponseData.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.create_pipeline_response import CreatePipelineResponse
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of CreatePipelineResponse from a JSON string
+create_pipeline_response_instance = CreatePipelineResponse.from_json(json)
+# print the JSON string representation of the object
+print(create_pipeline_response_instance.to_json())
+
+# convert the object into a dict
+create_pipeline_response_dict = create_pipeline_response_instance.to_dict()
+# create an instance of CreatePipelineResponse from a dict
+create_pipeline_response_from_dict = CreatePipelineResponse.from_dict(create_pipeline_response_dict)
+```
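+
+The `data` field is a [CreatePipelineResponseData](CreatePipelineResponseData.md) carrying the id of the newly created pipeline. A minimal sketch of pulling the id out of a parsed response (the JSON below is an illustrative placeholder):
+
+```python
+# parse a response payload and read the new pipeline id
+resp = CreatePipelineResponse.from_json('{"message": "Pipeline created", "data": {"id": "pipeline-123"}}')
+print(resp.data.id)  # -> pipeline-123
+```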
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/CreatePipelineResponseData.md b/docs/CreatePipelineResponseData.md
new file mode 100644
index 0000000..7e86a7a
--- /dev/null
+++ b/docs/CreatePipelineResponseData.md
@@ -0,0 +1,29 @@
+# CreatePipelineResponseData
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | |
+
+## Example
+
+```python
+from vectorize_client.models.create_pipeline_response_data import CreatePipelineResponseData
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of CreatePipelineResponseData from a JSON string
+create_pipeline_response_data_instance = CreatePipelineResponseData.from_json(json)
+# print the JSON string representation of the object
+print(create_pipeline_response_data_instance.to_json())
+
+# convert the object into a dict
+create_pipeline_response_data_dict = create_pipeline_response_data_instance.to_dict()
+# create an instance of CreatePipelineResponseData from a dict
+create_pipeline_response_data_from_dict = CreatePipelineResponseData.from_dict(create_pipeline_response_data_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/CreateSourceConnector.md b/docs/CreateSourceConnector.md
new file mode 100644
index 0000000..c8674f1
--- /dev/null
+++ b/docs/CreateSourceConnector.md
@@ -0,0 +1,31 @@
+# CreateSourceConnector
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | |
+**type** | [**SourceConnectorType**](SourceConnectorType.md) | |
+**config** | **Dict[str, Optional[object]]** | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.create_source_connector import CreateSourceConnector
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of CreateSourceConnector from a JSON string
+create_source_connector_instance = CreateSourceConnector.from_json(json)
+# print the JSON string representation of the object
+print(create_source_connector_instance.to_json())
+
+# convert the object into a dict
+create_source_connector_dict = create_source_connector_instance.to_dict()
+# create an instance of CreateSourceConnector from a dict
+create_source_connector_from_dict = CreateSourceConnector.from_dict(create_source_connector_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/CreateSourceConnectorRequestInner.md b/docs/CreateSourceConnectorRequestInner.md
new file mode 100644
index 0000000..dcea7d4
--- /dev/null
+++ b/docs/CreateSourceConnectorRequestInner.md
@@ -0,0 +1,31 @@
+# CreateSourceConnectorRequestInner
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"AWS_S3\") |
+**config** | [**FIREFLIESConfig**](FIREFLIESConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.create_source_connector_request_inner import CreateSourceConnectorRequestInner
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of CreateSourceConnectorRequestInner from a JSON string
+create_source_connector_request_inner_instance = CreateSourceConnectorRequestInner.from_json(json)
+# print the JSON string representation of the object
+print(create_source_connector_request_inner_instance.to_json())
+
+# convert the object into a dict
+create_source_connector_request_inner_dict = create_source_connector_request_inner_instance.to_dict()
+# create an instance of CreateSourceConnectorRequestInner from a dict
+create_source_connector_request_inner_from_dict = CreateSourceConnectorRequestInner.from_dict(create_source_connector_request_inner_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/CreateSourceConnectorResponse.md b/docs/CreateSourceConnectorResponse.md
new file mode 100644
index 0000000..7124435
--- /dev/null
+++ b/docs/CreateSourceConnectorResponse.md
@@ -0,0 +1,30 @@
+# CreateSourceConnectorResponse
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**message** | **str** | |
+**connectors** | [**List[CreatedSourceConnector]**](CreatedSourceConnector.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.create_source_connector_response import CreateSourceConnectorResponse
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of CreateSourceConnectorResponse from a JSON string
+create_source_connector_response_instance = CreateSourceConnectorResponse.from_json(json)
+# print the JSON string representation of the object
+print(create_source_connector_response_instance.to_json())
+
+# convert the object into a dict
+create_source_connector_response_dict = create_source_connector_response_instance.to_dict()
+# create an instance of CreateSourceConnectorResponse from a dict
+create_source_connector_response_from_dict = CreateSourceConnectorResponse.from_dict(create_source_connector_response_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/CreatedAIPlatformConnector.md b/docs/CreatedAIPlatformConnector.md
new file mode 100644
index 0000000..c6131de
--- /dev/null
+++ b/docs/CreatedAIPlatformConnector.md
@@ -0,0 +1,30 @@
+# CreatedAIPlatformConnector
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | |
+**id** | **str** | |
+
+## Example
+
+```python
+from vectorize_client.models.created_ai_platform_connector import CreatedAIPlatformConnector
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of CreatedAIPlatformConnector from a JSON string
+created_ai_platform_connector_instance = CreatedAIPlatformConnector.from_json(json)
+# print the JSON string representation of the object
+print(created_ai_platform_connector_instance.to_json())
+
+# convert the object into a dict
+created_ai_platform_connector_dict = created_ai_platform_connector_instance.to_dict()
+# create an instance of CreatedAIPlatformConnector from a dict
+created_ai_platform_connector_from_dict = CreatedAIPlatformConnector.from_dict(created_ai_platform_connector_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/CreatedDestinationConnector.md b/docs/CreatedDestinationConnector.md
new file mode 100644
index 0000000..236a87a
--- /dev/null
+++ b/docs/CreatedDestinationConnector.md
@@ -0,0 +1,30 @@
+# CreatedDestinationConnector
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | |
+**id** | **str** | |
+
+## Example
+
+```python
+from vectorize_client.models.created_destination_connector import CreatedDestinationConnector
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of CreatedDestinationConnector from a JSON string
+created_destination_connector_instance = CreatedDestinationConnector.from_json(json)
+# print the JSON string representation of the object
+print(created_destination_connector_instance.to_json())
+
+# convert the object into a dict
+created_destination_connector_dict = created_destination_connector_instance.to_dict()
+# create an instance of CreatedDestinationConnector from a dict
+created_destination_connector_from_dict = CreatedDestinationConnector.from_dict(created_destination_connector_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/CreatedSourceConnector.md b/docs/CreatedSourceConnector.md
new file mode 100644
index 0000000..471753b
--- /dev/null
+++ b/docs/CreatedSourceConnector.md
@@ -0,0 +1,30 @@
+# CreatedSourceConnector
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | |
+**id** | **str** | |
+
+## Example
+
+```python
+from vectorize_client.models.created_source_connector import CreatedSourceConnector
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of CreatedSourceConnector from a JSON string
+created_source_connector_instance = CreatedSourceConnector.from_json(json)
+# print the JSON string representation of the object
+print(created_source_connector_instance.to_json())
+
+# convert the object into a dict
+created_source_connector_dict = created_source_connector_instance.to_dict()
+# create an instance of CreatedSourceConnector from a dict
+created_source_connector_from_dict = CreatedSourceConnector.from_dict(created_source_connector_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/DATASTAXAuthConfig.md b/docs/DATASTAXAuthConfig.md
new file mode 100644
index 0000000..8dcd8e3
--- /dev/null
+++ b/docs/DATASTAXAuthConfig.md
@@ -0,0 +1,32 @@
+# DATASTAXAuthConfig
+
+Authentication configuration for DataStax Astra
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name for your DataStax integration |
+**endpoint_secret** | **str** | API Endpoint. Example: Enter your API endpoint |
+**token** | **str** | Application Token. Example: Enter your application token |
+
+## Example
+
+```python
+from vectorize_client.models.datastax_auth_config import DATASTAXAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of DATASTAXAuthConfig from a JSON string
+datastax_auth_config_instance = DATASTAXAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(datastax_auth_config_instance.to_json())
+
+# convert the object into a dict
+datastax_auth_config_dict = datastax_auth_config_instance.to_dict()
+# create an instance of DATASTAXAuthConfig from a dict
+datastax_auth_config_from_dict = DATASTAXAuthConfig.from_dict(datastax_auth_config_dict)
+```
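+
+Because all three properties are plain strings, the model can also be constructed directly with keyword arguments. A minimal sketch with placeholder values (not real credentials), assuming the generated pydantic model accepts the python field names shown in the table above:
+
+```python
+# placeholder values only; substitute your own Astra endpoint and token
+auth = DATASTAXAuthConfig(
+    name="My DataStax integration",
+    endpoint_secret="https://<your-astra-endpoint>",
+    token="AstraCS:<your-application-token>",
+)
+print(auth.to_json())
+```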
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/DATASTAXConfig.md b/docs/DATASTAXConfig.md
new file mode 100644
index 0000000..d6c8a6f
--- /dev/null
+++ b/docs/DATASTAXConfig.md
@@ -0,0 +1,30 @@
+# DATASTAXConfig
+
+Configuration for DataStax Astra connector
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**collection** | **str** | Collection Name. Example: Enter collection name |
+
+## Example
+
+```python
+from vectorize_client.models.datastax_config import DATASTAXConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of DATASTAXConfig from a JSON string
+datastax_config_instance = DATASTAXConfig.from_json(json)
+# print the JSON string representation of the object
+print(datastax_config_instance.to_json())
+
+# convert the object into a dict
+datastax_config_dict = datastax_config_instance.to_dict()
+# create an instance of DATASTAXConfig from a dict
+datastax_config_from_dict = DATASTAXConfig.from_dict(datastax_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/DISCORDAuthConfig.md b/docs/DISCORDAuthConfig.md
new file mode 100644
index 0000000..b41aea0
--- /dev/null
+++ b/docs/DISCORDAuthConfig.md
@@ -0,0 +1,33 @@
+# DISCORDAuthConfig
+
+Authentication configuration for Discord
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name |
+**server_id** | **str** | Server ID. Example: Enter Server ID |
+**bot_token** | **str** | Bot token. Example: Enter Token |
+**channel_ids** | **str** | Channel ID. Example: Enter channel ID |
+
+## Example
+
+```python
+from vectorize_client.models.discord_auth_config import DISCORDAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of DISCORDAuthConfig from a JSON string
+discord_auth_config_instance = DISCORDAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(discord_auth_config_instance.to_json())
+
+# convert the object into a dict
+discord_auth_config_dict = discord_auth_config_instance.to_dict()
+# create an instance of DISCORDAuthConfig from a dict
+discord_auth_config_from_dict = DISCORDAuthConfig.from_dict(discord_auth_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/DISCORDConfig.md b/docs/DISCORDConfig.md
new file mode 100644
index 0000000..6692eb3
--- /dev/null
+++ b/docs/DISCORDConfig.md
@@ -0,0 +1,36 @@
+# DISCORDConfig
+
+Configuration for Discord connector
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**emoji** | **str** | Emoji Filter. Example: Enter custom emoji filter name | [optional]
+**author** | **str** | Author Filter. Example: Enter author name | [optional]
+**ignore_author** | **str** | Ignore Author Filter. Example: Enter ignore author name | [optional]
+**limit** | **float** | Limit. Example: Enter limit | [optional] [default to 10000]
+**thread_message_inclusion** | **str** | Thread Message Inclusion | [optional] [default to 'ALL']
+**filter_logic** | **str** | Filter Logic | [optional] [default to 'AND']
+**thread_message_mode** | **str** | Thread Message Mode | [optional] [default to 'CONCATENATE']
+
+## Example
+
+```python
+from vectorize_client.models.discord_config import DISCORDConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of DISCORDConfig from a JSON string
+discord_config_instance = DISCORDConfig.from_json(json)
+# print the JSON string representation of the object
+print(discord_config_instance.to_json())
+
+# convert the object into a dict
+discord_config_dict = discord_config_instance.to_dict()
+# create an instance of DISCORDConfig from a dict
+discord_config_from_dict = DISCORDConfig.from_dict(discord_config_dict)
+```
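+
+Since every property is optional, the config can be built with only the filters you need; omitted fields keep the defaults listed above. A minimal sketch, assuming the generated pydantic model accepts the python field names from the table:
+
+```python
+# only set a couple of filters; the rest fall back to their defaults
+config = DISCORDConfig(author="example-author", limit=5000)
+print(config.filter_logic)         # defaults to 'AND'
+print(config.thread_message_mode)  # defaults to 'CONCATENATE'
+```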
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/DROPBOXAuthConfig.md b/docs/DROPBOXAuthConfig.md
new file mode 100644
index 0000000..cc916d9
--- /dev/null
+++ b/docs/DROPBOXAuthConfig.md
@@ -0,0 +1,31 @@
+# DROPBOXAuthConfig
+
+Authentication configuration for Dropbox (Legacy)
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name |
+**refresh_token** | **str** | Connect Dropbox to Vectorize. Example: Authorize |
+
+## Example
+
+```python
+from vectorize_client.models.dropbox_auth_config import DROPBOXAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of DROPBOXAuthConfig from a JSON string
+dropbox_auth_config_instance = DROPBOXAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(dropbox_auth_config_instance.to_json())
+
+# convert the object into a dict
+dropbox_auth_config_dict = dropbox_auth_config_instance.to_dict()
+# create an instance of DROPBOXAuthConfig from a dict
+dropbox_auth_config_from_dict = DROPBOXAuthConfig.from_dict(dropbox_auth_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/DROPBOXConfig.md b/docs/DROPBOXConfig.md
new file mode 100644
index 0000000..2994d5a
--- /dev/null
+++ b/docs/DROPBOXConfig.md
@@ -0,0 +1,30 @@
+# DROPBOXConfig
+
+Configuration for Dropbox (Legacy) connector
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**path_prefix** | **str** | Read from these folders (optional). Example: Enter Path: /exampleFolder/subFolder | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.dropbox_config import DROPBOXConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of DROPBOXConfig from a JSON string
+dropbox_config_instance = DROPBOXConfig.from_json(json)
+# print the JSON string representation of the object
+print(dropbox_config_instance.to_json())
+
+# convert the object into a dict
+dropbox_config_dict = dropbox_config_instance.to_dict()
+# create an instance of DROPBOXConfig from a dict
+dropbox_config_from_dict = DROPBOXConfig.from_dict(dropbox_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/DROPBOXOAUTHAuthConfig.md b/docs/DROPBOXOAUTHAuthConfig.md
new file mode 100644
index 0000000..a18c009
--- /dev/null
+++ b/docs/DROPBOXOAUTHAuthConfig.md
@@ -0,0 +1,34 @@
+# DROPBOXOAUTHAuthConfig
+
+Authentication configuration for Dropbox OAuth
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name |
+**authorized_user** | **str** | Authorized User | [optional]
+**selection_details** | **str** | Connect Dropbox to Vectorize. Example: Authorize |
+**edited_users** | **str** | | [optional]
+**reconnect_users** | **str** | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.dropboxoauth_auth_config import DROPBOXOAUTHAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of DROPBOXOAUTHAuthConfig from a JSON string
+dropboxoauth_auth_config_instance = DROPBOXOAUTHAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(dropboxoauth_auth_config_instance.to_json())
+
+# convert the object into a dict
+dropboxoauth_auth_config_dict = dropboxoauth_auth_config_instance.to_dict()
+# create an instance of DROPBOXOAUTHAuthConfig from a dict
+dropboxoauth_auth_config_from_dict = DROPBOXOAUTHAuthConfig.from_dict(dropboxoauth_auth_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/DROPBOXOAUTHMULTIAuthConfig.md b/docs/DROPBOXOAUTHMULTIAuthConfig.md
new file mode 100644
index 0000000..caa1dfb
--- /dev/null
+++ b/docs/DROPBOXOAUTHMULTIAuthConfig.md
@@ -0,0 +1,33 @@
+# DROPBOXOAUTHMULTIAuthConfig
+
+Authentication configuration for Dropbox Multi-User (Vectorize)
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name |
+**authorized_users** | **str** | Authorized Users | [optional]
+**edited_users** | **str** | | [optional]
+**deleted_users** | **str** | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.dropboxoauthmulti_auth_config import DROPBOXOAUTHMULTIAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of DROPBOXOAUTHMULTIAuthConfig from a JSON string
+dropboxoauthmulti_auth_config_instance = DROPBOXOAUTHMULTIAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(dropboxoauthmulti_auth_config_instance.to_json())
+
+# convert the object into a dict
+dropboxoauthmulti_auth_config_dict = dropboxoauthmulti_auth_config_instance.to_dict()
+# create an instance of DROPBOXOAUTHMULTIAuthConfig from a dict
+dropboxoauthmulti_auth_config_from_dict = DROPBOXOAUTHMULTIAuthConfig.from_dict(dropboxoauthmulti_auth_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/DROPBOXOAUTHMULTICUSTOMAuthConfig.md b/docs/DROPBOXOAUTHMULTICUSTOMAuthConfig.md
new file mode 100644
index 0000000..ceef340
--- /dev/null
+++ b/docs/DROPBOXOAUTHMULTICUSTOMAuthConfig.md
@@ -0,0 +1,35 @@
+# DROPBOXOAUTHMULTICUSTOMAuthConfig
+
+Authentication configuration for Dropbox Multi-User (White Label)
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name |
+**app_key** | **str** | Dropbox App Key. Example: Enter App Key |
+**app_secret** | **str** | Dropbox App Secret. Example: Enter App Secret |
+**authorized_users** | **str** | Authorized Users | [optional]
+**edited_users** | **str** | | [optional]
+**deleted_users** | **str** | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.dropboxoauthmulticustom_auth_config import DROPBOXOAUTHMULTICUSTOMAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of DROPBOXOAUTHMULTICUSTOMAuthConfig from a JSON string
+dropboxoauthmulticustom_auth_config_instance = DROPBOXOAUTHMULTICUSTOMAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(dropboxoauthmulticustom_auth_config_instance.to_json())
+
+# convert the object into a dict
+dropboxoauthmulticustom_auth_config_dict = dropboxoauthmulticustom_auth_config_instance.to_dict()
+# create an instance of DROPBOXOAUTHMULTICUSTOMAuthConfig from a dict
+dropboxoauthmulticustom_auth_config_from_dict = DROPBOXOAUTHMULTICUSTOMAuthConfig.from_dict(dropboxoauthmulticustom_auth_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Datastax.md b/docs/Datastax.md
new file mode 100644
index 0000000..c0d5791
--- /dev/null
+++ b/docs/Datastax.md
@@ -0,0 +1,31 @@
+# Datastax
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"DATASTAX\") |
+**config** | [**DATASTAXConfig**](DATASTAXConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.datastax import Datastax
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Datastax from a JSON string
+datastax_instance = Datastax.from_json(json)
+# print the JSON string representation of the object
+print(datastax_instance.to_json())
+
+# convert the object into a dict
+datastax_dict = datastax_instance.to_dict()
+# create an instance of Datastax from a dict
+datastax_from_dict = Datastax.from_dict(datastax_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Datastax1.md b/docs/Datastax1.md
new file mode 100644
index 0000000..3c23f1f
--- /dev/null
+++ b/docs/Datastax1.md
@@ -0,0 +1,29 @@
+# Datastax1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**DATASTAXConfig**](DATASTAXConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.datastax1 import Datastax1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Datastax1 from a JSON string
+datastax1_instance = Datastax1.from_json(json)
+# print the JSON string representation of the object
+print(datastax1_instance.to_json())
+
+# convert the object into a dict
+datastax1_dict = datastax1_instance.to_dict()
+# create an instance of Datastax1 from a dict
+datastax1_from_dict = Datastax1.from_dict(datastax1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Datastax2.md b/docs/Datastax2.md
new file mode 100644
index 0000000..a309855
--- /dev/null
+++ b/docs/Datastax2.md
@@ -0,0 +1,30 @@
+# Datastax2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"DATASTAX\") |
+
+## Example
+
+```python
+from vectorize_client.models.datastax2 import Datastax2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Datastax2 from a JSON string
+datastax2_instance = Datastax2.from_json(json)
+# print the JSON string representation of the object
+print(datastax2_instance.to_json())
+
+# convert the object into a dict
+datastax2_dict = datastax2_instance.to_dict()
+# create an instance of Datastax2 from a dict
+datastax2_from_dict = Datastax2.from_dict(datastax2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/DeepResearchResult.md b/docs/DeepResearchResult.md
new file mode 100644
index 0000000..860c339
--- /dev/null
+++ b/docs/DeepResearchResult.md
@@ -0,0 +1,32 @@
+# DeepResearchResult
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**success** | **bool** | |
+**events** | **List[str]** | | [optional]
+**markdown** | **str** | | [optional]
+**error** | **str** | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.deep_research_result import DeepResearchResult
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of DeepResearchResult from a JSON string
+deep_research_result_instance = DeepResearchResult.from_json(json)
+# print the JSON string representation of the object
+print(deep_research_result_instance.to_json())
+
+# convert the object into a dict
+deep_research_result_dict = deep_research_result_instance.to_dict()
+# create an instance of DeepResearchResult from a dict
+deep_research_result_from_dict = DeepResearchResult.from_dict(deep_research_result_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/DeleteAIPlatformConnectorResponse.md b/docs/DeleteAIPlatformConnectorResponse.md
new file mode 100644
index 0000000..106ddaf
--- /dev/null
+++ b/docs/DeleteAIPlatformConnectorResponse.md
@@ -0,0 +1,29 @@
+# DeleteAIPlatformConnectorResponse
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**message** | **str** | |
+
+## Example
+
+```python
+from vectorize_client.models.delete_ai_platform_connector_response import DeleteAIPlatformConnectorResponse
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of DeleteAIPlatformConnectorResponse from a JSON string
+delete_ai_platform_connector_response_instance = DeleteAIPlatformConnectorResponse.from_json(json)
+# print the JSON string representation of the object
+print(delete_ai_platform_connector_response_instance.to_json())
+
+# convert the object into a dict
+delete_ai_platform_connector_response_dict = delete_ai_platform_connector_response_instance.to_dict()
+# create an instance of DeleteAIPlatformConnectorResponse from a dict
+delete_ai_platform_connector_response_from_dict = DeleteAIPlatformConnectorResponse.from_dict(delete_ai_platform_connector_response_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/DeleteDestinationConnectorResponse.md b/docs/DeleteDestinationConnectorResponse.md
new file mode 100644
index 0000000..4a80311
--- /dev/null
+++ b/docs/DeleteDestinationConnectorResponse.md
@@ -0,0 +1,29 @@
+# DeleteDestinationConnectorResponse
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**message** | **str** | |
+
+## Example
+
+```python
+from vectorize_client.models.delete_destination_connector_response import DeleteDestinationConnectorResponse
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of DeleteDestinationConnectorResponse from a JSON string
+delete_destination_connector_response_instance = DeleteDestinationConnectorResponse.from_json(json)
+# print the JSON string representation of the object
+print(delete_destination_connector_response_instance.to_json())
+
+# convert the object into a dict
+delete_destination_connector_response_dict = delete_destination_connector_response_instance.to_dict()
+# create an instance of DeleteDestinationConnectorResponse from a dict
+delete_destination_connector_response_from_dict = DeleteDestinationConnectorResponse.from_dict(delete_destination_connector_response_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/DeleteFileResponse.md b/docs/DeleteFileResponse.md
new file mode 100644
index 0000000..7a3df62
--- /dev/null
+++ b/docs/DeleteFileResponse.md
@@ -0,0 +1,30 @@
+# DeleteFileResponse
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**message** | **str** | |
+**file_name** | **str** | |
+
+## Example
+
+```python
+from vectorize_client.models.delete_file_response import DeleteFileResponse
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of DeleteFileResponse from a JSON string
+delete_file_response_instance = DeleteFileResponse.from_json(json)
+# print the JSON string representation of the object
+print(DeleteFileResponse.to_json())
+
+# convert the object into a dict
+delete_file_response_dict = delete_file_response_instance.to_dict()
+# create an instance of DeleteFileResponse from a dict
+delete_file_response_from_dict = DeleteFileResponse.from_dict(delete_file_response_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/DeletePipelineResponse.md b/docs/DeletePipelineResponse.md
new file mode 100644
index 0000000..ab7c475
--- /dev/null
+++ b/docs/DeletePipelineResponse.md
@@ -0,0 +1,29 @@
+# DeletePipelineResponse
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**message** | **str** | |
+
+## Example
+
+```python
+from vectorize_client.models.delete_pipeline_response import DeletePipelineResponse
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of DeletePipelineResponse from a JSON string
+delete_pipeline_response_instance = DeletePipelineResponse.from_json(json)
+# print the JSON string representation of the object
+print(delete_pipeline_response_instance.to_json())
+
+# convert the object into a dict
+delete_pipeline_response_dict = delete_pipeline_response_instance.to_dict()
+# create an instance of DeletePipelineResponse from a dict
+delete_pipeline_response_from_dict = DeletePipelineResponse.from_dict(delete_pipeline_response_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/DeleteSourceConnectorResponse.md b/docs/DeleteSourceConnectorResponse.md
new file mode 100644
index 0000000..c8e57e2
--- /dev/null
+++ b/docs/DeleteSourceConnectorResponse.md
@@ -0,0 +1,29 @@
+# DeleteSourceConnectorResponse
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**message** | **str** | |
+
+## Example
+
+```python
+from vectorize_client.models.delete_source_connector_response import DeleteSourceConnectorResponse
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of DeleteSourceConnectorResponse from a JSON string
+delete_source_connector_response_instance = DeleteSourceConnectorResponse.from_json(json)
+# print the JSON string representation of the object
+print(delete_source_connector_response_instance.to_json())
+
+# convert the object into a dict
+delete_source_connector_response_dict = delete_source_connector_response_instance.to_dict()
+# create an instance of DeleteSourceConnectorResponse from a dict
+delete_source_connector_response_from_dict = DeleteSourceConnectorResponse.from_dict(delete_source_connector_response_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/DestinationConnector.md b/docs/DestinationConnector.md
new file mode 100644
index 0000000..62b1b7d
--- /dev/null
+++ b/docs/DestinationConnector.md
@@ -0,0 +1,39 @@
+# DestinationConnector
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | |
+**type** | **str** | |
+**name** | **str** | |
+**config_doc** | **Dict[str, Optional[object]]** | | [optional]
+**created_at** | **str** | | [optional]
+**created_by_id** | **str** | | [optional]
+**last_updated_by_id** | **str** | | [optional]
+**created_by_email** | **str** | | [optional]
+**last_updated_by_email** | **str** | | [optional]
+**error_message** | **str** | | [optional]
+**verification_status** | **str** | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.destination_connector import DestinationConnector
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of DestinationConnector from a JSON string
+destination_connector_instance = DestinationConnector.from_json(json)
+# print the JSON string representation of the object
+print(destination_connector_instance.to_json())
+
+# convert the object into a dict
+destination_connector_dict = destination_connector_instance.to_dict()
+# create an instance of DestinationConnector from a dict
+destination_connector_from_dict = DestinationConnector.from_dict(destination_connector_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/DestinationConnectorInput.md b/docs/DestinationConnectorInput.md
new file mode 100644
index 0000000..fa0450c
--- /dev/null
+++ b/docs/DestinationConnectorInput.md
@@ -0,0 +1,32 @@
+# DestinationConnectorInput
+
+Destination connector configuration
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the destination connector |
+**type** | **str** | Type of destination connector |
+**config** | [**DestinationConnectorInputConfig**](DestinationConnectorInputConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.destination_connector_input import DestinationConnectorInput
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of DestinationConnectorInput from a JSON string
+destination_connector_input_instance = DestinationConnectorInput.from_json(json)
+# print the JSON string representation of the object
+print(destination_connector_input_instance.to_json())
+
+# convert the object into a dict
+destination_connector_input_dict = destination_connector_input_instance.to_dict()
+# create an instance of DestinationConnectorInput from a dict
+destination_connector_input_from_dict = DestinationConnectorInput.from_dict(destination_connector_input_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/DestinationConnectorInputConfig.md b/docs/DestinationConnectorInputConfig.md
new file mode 100644
index 0000000..e60b3cd
--- /dev/null
+++ b/docs/DestinationConnectorInputConfig.md
@@ -0,0 +1,35 @@
+# DestinationConnectorInputConfig
+
+Configuration specific to the connector type
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**bucket** | **str** | Bucket Name. Example: Enter bucket name |
+**scope** | **str** | Scope Name. Example: Enter scope name |
+**collection** | **str** | Collection Name. Example: Enter collection name |
+**index** | **str** | Index Name. Example: Enter index name |
+**namespace** | **str** | Namespace. Example: Enter namespace name |
+**table** | **str** | Table Name. Example: Enter &lt;table name&gt; or &lt;schema&gt;.&lt;table name&gt; |
+
+## Example
+
+```python
+from vectorize_client.models.destination_connector_input_config import DestinationConnectorInputConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of DestinationConnectorInputConfig from a JSON string
+destination_connector_input_config_instance = DestinationConnectorInputConfig.from_json(json)
+# print the JSON string representation of the object
+print(destination_connector_input_config_instance.to_json())
+
+# convert the object into a dict
+destination_connector_input_config_dict = destination_connector_input_config_instance.to_dict()
+# create an instance of DestinationConnectorInputConfig from a dict
+destination_connector_input_config_from_dict = DestinationConnectorInputConfig.from_dict(destination_connector_input_config_dict)
+```
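+
+For reference, a destination connector input can be built by nesting this config in a `DestinationConnectorInput`. A minimal sketch with placeholder values; the table above lists every field, but in practice only the fields relevant to the chosen destination type are meaningful:
+
+```python
+from vectorize_client.models.destination_connector_input import DestinationConnectorInput
+from vectorize_client.models.destination_connector_input_config import DestinationConnectorInputConfig
+
+# placeholder values for each documented field
+config = DestinationConnectorInputConfig(
+    bucket="my-bucket",
+    scope="my-scope",
+    collection="my-collection",
+    index="my-index",
+    namespace="my-namespace",
+    table="public.my_table",
+)
+
+destination = DestinationConnectorInput(
+    id="my-destination-connector-id",  # placeholder id of an existing destination connector
+    type="PINECONE",                   # one of the values listed in DestinationConnectorType.md
+    config=config,
+)
+print(destination.to_json())
+```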
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/DestinationConnectorSchema.md b/docs/DestinationConnectorSchema.md
new file mode 100644
index 0000000..0c4cc7e
--- /dev/null
+++ b/docs/DestinationConnectorSchema.md
@@ -0,0 +1,31 @@
+# DestinationConnectorSchema
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | |
+**type** | [**DestinationConnectorType**](DestinationConnectorType.md) | |
+**config** | **Dict[str, Optional[object]]** | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.destination_connector_schema import DestinationConnectorSchema
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of DestinationConnectorSchema from a JSON string
+destination_connector_schema_instance = DestinationConnectorSchema.from_json(json)
+# print the JSON string representation of the object
+print(destination_connector_schema_instance.to_json())
+
+# convert the object into a dict
+destination_connector_schema_dict = destination_connector_schema_instance.to_dict()
+# create an instance of DestinationConnectorSchema from a dict
+destination_connector_schema_from_dict = DestinationConnectorSchema.from_dict(destination_connector_schema_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/DestinationConnectorType.md b/docs/DestinationConnectorType.md
new file mode 100644
index 0000000..82e80fa
--- /dev/null
+++ b/docs/DestinationConnectorType.md
@@ -0,0 +1,32 @@
+# DestinationConnectorType
+
+
+## Enum
+
+* `CAPELLA` (value: `'CAPELLA'`)
+
+* `DATASTAX` (value: `'DATASTAX'`)
+
+* `ELASTIC` (value: `'ELASTIC'`)
+
+* `PINECONE` (value: `'PINECONE'`)
+
+* `SINGLESTORE` (value: `'SINGLESTORE'`)
+
+* `MILVUS` (value: `'MILVUS'`)
+
+* `POSTGRESQL` (value: `'POSTGRESQL'`)
+
+* `QDRANT` (value: `'QDRANT'`)
+
+* `SUPABASE` (value: `'SUPABASE'`)
+
+* `WEAVIATE` (value: `'WEAVIATE'`)
+
+* `AZUREAISEARCH` (value: `'AZUREAISEARCH'`)
+
+* `TURBOPUFFER` (value: `'TURBOPUFFER'`)
+
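+A minimal usage sketch, assuming the enum module follows the same naming convention as the other generated models:
+
+```python
+from vectorize_client.models.destination_connector_schema import DestinationConnectorSchema
+from vectorize_client.models.destination_connector_type import DestinationConnectorType
+
+# enum members carry their string wire values
+connector_type = DestinationConnectorType.PINECONE
+print(connector_type.value)  # 'PINECONE'
+
+# the enum is used as the `type` field of DestinationConnectorSchema; the id is a placeholder
+schema = DestinationConnectorSchema(id="my-destination-id", type=connector_type)
+print(schema.to_json())
+```
+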
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Discord.md b/docs/Discord.md
new file mode 100644
index 0000000..5eb6fef
--- /dev/null
+++ b/docs/Discord.md
@@ -0,0 +1,31 @@
+# Discord
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"DISCORD\") |
+**config** | [**DISCORDConfig**](DISCORDConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.discord import Discord
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Discord from a JSON string
+discord_instance = Discord.from_json(json)
+# print the JSON string representation of the object
+print(discord_instance.to_json())
+
+# convert the object into a dict
+discord_dict = discord_instance.to_dict()
+# create an instance of Discord from a dict
+discord_from_dict = Discord.from_dict(discord_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Discord1.md b/docs/Discord1.md
new file mode 100644
index 0000000..48fc98b
--- /dev/null
+++ b/docs/Discord1.md
@@ -0,0 +1,29 @@
+# Discord1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**DISCORDConfig**](DISCORDConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.discord1 import Discord1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Discord1 from a JSON string
+discord1_instance = Discord1.from_json(json)
+# print the JSON string representation of the object
+print(discord1_instance.to_json())
+
+# convert the object into a dict
+discord1_dict = discord1_instance.to_dict()
+# create an instance of Discord1 from a dict
+discord1_from_dict = Discord1.from_dict(discord1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Discord2.md b/docs/Discord2.md
new file mode 100644
index 0000000..da87862
--- /dev/null
+++ b/docs/Discord2.md
@@ -0,0 +1,30 @@
+# Discord2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"DISCORD\") |
+
+## Example
+
+```python
+from vectorize_client.models.discord2 import Discord2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Discord2 from a JSON string
+discord2_instance = Discord2.from_json(json)
+# print the JSON string representation of the object
+print(discord2_instance.to_json())
+
+# convert the object into a dict
+discord2_dict = discord2_instance.to_dict()
+# create an instance of Discord2 from a dict
+discord2_from_dict = Discord2.from_dict(discord2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Document.md b/docs/Document.md
new file mode 100644
index 0000000..da775fe
--- /dev/null
+++ b/docs/Document.md
@@ -0,0 +1,41 @@
+# Document
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**relevancy** | **float** | |
+**id** | **str** | |
+**text** | **str** | |
+**chunk_id** | **str** | |
+**total_chunks** | **str** | |
+**origin** | **str** | |
+**origin_id** | **str** | |
+**similarity** | **float** | |
+**source** | **str** | |
+**unique_source** | **str** | |
+**source_display_name** | **str** | |
+**pipeline_id** | **str** | | [optional]
+**org_id** | **str** | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.document import Document
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Document from a JSON string
+document_instance = Document.from_json(json)
+# print the JSON string representation of the object
+print(document_instance.to_json())
+
+# convert the object into a dict
+document_dict = document_instance.to_dict()
+# create an instance of Document from a dict
+document_from_dict = Document.from_dict(document_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Dropbox.md b/docs/Dropbox.md
new file mode 100644
index 0000000..0e40063
--- /dev/null
+++ b/docs/Dropbox.md
@@ -0,0 +1,31 @@
+# Dropbox
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"DROPBOX\") |
+**config** | [**DROPBOXConfig**](DROPBOXConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.dropbox import Dropbox
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Dropbox from a JSON string
+dropbox_instance = Dropbox.from_json(json)
+# print the JSON string representation of the object
+print(dropbox_instance.to_json())
+
+# convert the object into a dict
+dropbox_dict = dropbox_instance.to_dict()
+# create an instance of Dropbox from a dict
+dropbox_from_dict = Dropbox.from_dict(dropbox_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Dropbox1.md b/docs/Dropbox1.md
new file mode 100644
index 0000000..762609e
--- /dev/null
+++ b/docs/Dropbox1.md
@@ -0,0 +1,29 @@
+# Dropbox1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**DROPBOXConfig**](DROPBOXConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.dropbox1 import Dropbox1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Dropbox1 from a JSON string
+dropbox1_instance = Dropbox1.from_json(json)
+# print the JSON string representation of the object
+print(dropbox1_instance.to_json())
+
+# convert the object into a dict
+dropbox1_dict = dropbox1_instance.to_dict()
+# create an instance of Dropbox1 from a dict
+dropbox1_from_dict = Dropbox1.from_dict(dropbox1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Dropbox2.md b/docs/Dropbox2.md
new file mode 100644
index 0000000..2019ff7
--- /dev/null
+++ b/docs/Dropbox2.md
@@ -0,0 +1,30 @@
+# Dropbox2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"DROPBOX\") |
+
+## Example
+
+```python
+from vectorize_client.models.dropbox2 import Dropbox2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Dropbox2 from a JSON string
+dropbox2_instance = Dropbox2.from_json(json)
+# print the JSON string representation of the object
+print(dropbox2_instance.to_json())
+
+# convert the object into a dict
+dropbox2_dict = dropbox2_instance.to_dict()
+# create an instance of Dropbox2 from a dict
+dropbox2_from_dict = Dropbox2.from_dict(dropbox2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/DropboxOauth.md b/docs/DropboxOauth.md
new file mode 100644
index 0000000..78d2fa2
--- /dev/null
+++ b/docs/DropboxOauth.md
@@ -0,0 +1,31 @@
+# DropboxOauth
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"DROPBOX_OAUTH\") |
+**config** | [**DROPBOXOAUTHAuthConfig**](DROPBOXOAUTHAuthConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.dropbox_oauth import DropboxOauth
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of DropboxOauth from a JSON string
+dropbox_oauth_instance = DropboxOauth.from_json(json)
+# print the JSON string representation of the object
+print(dropbox_oauth_instance.to_json())
+
+# convert the object into a dict
+dropbox_oauth_dict = dropbox_oauth_instance.to_dict()
+# create an instance of DropboxOauth from a dict
+dropbox_oauth_from_dict = DropboxOauth.from_dict(dropbox_oauth_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/DropboxOauth1.md b/docs/DropboxOauth1.md
new file mode 100644
index 0000000..89e5f49
--- /dev/null
+++ b/docs/DropboxOauth1.md
@@ -0,0 +1,29 @@
+# DropboxOauth1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**DROPBOXOAUTHAuthConfig**](DROPBOXOAUTHAuthConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.dropbox_oauth1 import DropboxOauth1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of DropboxOauth1 from a JSON string
+dropbox_oauth1_instance = DropboxOauth1.from_json(json)
+# print the JSON string representation of the object
+print(dropbox_oauth1_instance.to_json())
+
+# convert the object into a dict
+dropbox_oauth1_dict = dropbox_oauth1_instance.to_dict()
+# create an instance of DropboxOauth1 from a dict
+dropbox_oauth1_from_dict = DropboxOauth1.from_dict(dropbox_oauth1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/DropboxOauth2.md b/docs/DropboxOauth2.md
new file mode 100644
index 0000000..ae6a141
--- /dev/null
+++ b/docs/DropboxOauth2.md
@@ -0,0 +1,30 @@
+# DropboxOauth2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"DROPBOX_OAUTH\") |
+
+## Example
+
+```python
+from vectorize_client.models.dropbox_oauth2 import DropboxOauth2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of DropboxOauth2 from a JSON string
+dropbox_oauth2_instance = DropboxOauth2.from_json(json)
+# print the JSON string representation of the object
+print(dropbox_oauth2_instance.to_json())
+
+# convert the object into a dict
+dropbox_oauth2_dict = dropbox_oauth2_instance.to_dict()
+# create an instance of DropboxOauth2 from a dict
+dropbox_oauth2_from_dict = DropboxOauth2.from_dict(dropbox_oauth2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/DropboxOauthMulti.md b/docs/DropboxOauthMulti.md
new file mode 100644
index 0000000..dc590e7
--- /dev/null
+++ b/docs/DropboxOauthMulti.md
@@ -0,0 +1,31 @@
+# DropboxOauthMulti
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"DROPBOX_OAUTH_MULTI\") |
+**config** | [**DROPBOXOAUTHMULTIAuthConfig**](DROPBOXOAUTHMULTIAuthConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.dropbox_oauth_multi import DropboxOauthMulti
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of DropboxOauthMulti from a JSON string
+dropbox_oauth_multi_instance = DropboxOauthMulti.from_json(json)
+# print the JSON string representation of the object
+print(dropbox_oauth_multi_instance.to_json())
+
+# convert the object into a dict
+dropbox_oauth_multi_dict = dropbox_oauth_multi_instance.to_dict()
+# create an instance of DropboxOauthMulti from a dict
+dropbox_oauth_multi_from_dict = DropboxOauthMulti.from_dict(dropbox_oauth_multi_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/DropboxOauthMulti1.md b/docs/DropboxOauthMulti1.md
new file mode 100644
index 0000000..bd53c6f
--- /dev/null
+++ b/docs/DropboxOauthMulti1.md
@@ -0,0 +1,29 @@
+# DropboxOauthMulti1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**DROPBOXOAUTHMULTIAuthConfig**](DROPBOXOAUTHMULTIAuthConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.dropbox_oauth_multi1 import DropboxOauthMulti1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of DropboxOauthMulti1 from a JSON string
+dropbox_oauth_multi1_instance = DropboxOauthMulti1.from_json(json)
+# print the JSON string representation of the object
+print(dropbox_oauth_multi1_instance.to_json())
+
+# convert the object into a dict
+dropbox_oauth_multi1_dict = dropbox_oauth_multi1_instance.to_dict()
+# create an instance of DropboxOauthMulti1 from a dict
+dropbox_oauth_multi1_from_dict = DropboxOauthMulti1.from_dict(dropbox_oauth_multi1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/DropboxOauthMulti2.md b/docs/DropboxOauthMulti2.md
new file mode 100644
index 0000000..7f52cc9
--- /dev/null
+++ b/docs/DropboxOauthMulti2.md
@@ -0,0 +1,30 @@
+# DropboxOauthMulti2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"DROPBOX_OAUTH_MULTI\") |
+
+## Example
+
+```python
+from vectorize_client.models.dropbox_oauth_multi2 import DropboxOauthMulti2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of DropboxOauthMulti2 from a JSON string
+dropbox_oauth_multi2_instance = DropboxOauthMulti2.from_json(json)
+# print the JSON string representation of the object
+print(dropbox_oauth_multi2_instance.to_json())
+
+# convert the object into a dict
+dropbox_oauth_multi2_dict = dropbox_oauth_multi2_instance.to_dict()
+# create an instance of DropboxOauthMulti2 from a dict
+dropbox_oauth_multi2_from_dict = DropboxOauthMulti2.from_dict(dropbox_oauth_multi2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/DropboxOauthMultiCustom.md b/docs/DropboxOauthMultiCustom.md
new file mode 100644
index 0000000..7ad3444
--- /dev/null
+++ b/docs/DropboxOauthMultiCustom.md
@@ -0,0 +1,31 @@
+# DropboxOauthMultiCustom
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"DROPBOX_OAUTH_MULTI_CUSTOM\") |
+**config** | [**DROPBOXOAUTHMULTICUSTOMAuthConfig**](DROPBOXOAUTHMULTICUSTOMAuthConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.dropbox_oauth_multi_custom import DropboxOauthMultiCustom
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of DropboxOauthMultiCustom from a JSON string
+dropbox_oauth_multi_custom_instance = DropboxOauthMultiCustom.from_json(json)
+# print the JSON string representation of the object
+print(dropbox_oauth_multi_custom_instance.to_json())
+
+# convert the object into a dict
+dropbox_oauth_multi_custom_dict = dropbox_oauth_multi_custom_instance.to_dict()
+# create an instance of DropboxOauthMultiCustom from a dict
+dropbox_oauth_multi_custom_from_dict = DropboxOauthMultiCustom.from_dict(dropbox_oauth_multi_custom_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/DropboxOauthMultiCustom1.md b/docs/DropboxOauthMultiCustom1.md
new file mode 100644
index 0000000..ed825be
--- /dev/null
+++ b/docs/DropboxOauthMultiCustom1.md
@@ -0,0 +1,29 @@
+# DropboxOauthMultiCustom1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**DROPBOXOAUTHMULTICUSTOMAuthConfig**](DROPBOXOAUTHMULTICUSTOMAuthConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.dropbox_oauth_multi_custom1 import DropboxOauthMultiCustom1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of DropboxOauthMultiCustom1 from a JSON string
+dropbox_oauth_multi_custom1_instance = DropboxOauthMultiCustom1.from_json(json)
+# print the JSON string representation of the object
+print(dropbox_oauth_multi_custom1_instance.to_json())
+
+# convert the object into a dict
+dropbox_oauth_multi_custom1_dict = dropbox_oauth_multi_custom1_instance.to_dict()
+# create an instance of DropboxOauthMultiCustom1 from a dict
+dropbox_oauth_multi_custom1_from_dict = DropboxOauthMultiCustom1.from_dict(dropbox_oauth_multi_custom1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/DropboxOauthMultiCustom2.md b/docs/DropboxOauthMultiCustom2.md
new file mode 100644
index 0000000..ef7a0e1
--- /dev/null
+++ b/docs/DropboxOauthMultiCustom2.md
@@ -0,0 +1,30 @@
+# DropboxOauthMultiCustom2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"DROPBOX_OAUTH_MULTI_CUSTOM\") |
+
+## Example
+
+```python
+from vectorize_client.models.dropbox_oauth_multi_custom2 import DropboxOauthMultiCustom2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of DropboxOauthMultiCustom2 from a JSON string
+dropbox_oauth_multi_custom2_instance = DropboxOauthMultiCustom2.from_json(json)
+# print the JSON string representation of the object
+print(dropbox_oauth_multi_custom2_instance.to_json())
+
+# convert the object into a dict
+dropbox_oauth_multi_custom2_dict = dropbox_oauth_multi_custom2_instance.to_dict()
+# create an instance of DropboxOauthMultiCustom2 from a dict
+dropbox_oauth_multi_custom2_from_dict = DropboxOauthMultiCustom2.from_dict(dropbox_oauth_multi_custom2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/ELASTICAuthConfig.md b/docs/ELASTICAuthConfig.md
new file mode 100644
index 0000000..d6197e1
--- /dev/null
+++ b/docs/ELASTICAuthConfig.md
@@ -0,0 +1,33 @@
+# ELASTICAuthConfig
+
+Authentication configuration for Elasticsearch
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name for your Elastic integration |
+**host** | **str** | Host. Example: Enter your host |
+**port** | **str** | Port. Example: Enter your port |
+**api_key** | **str** | API Key. Example: Enter your API key |
+
+## Example
+
+```python
+from vectorize_client.models.elastic_auth_config import ELASTICAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of ELASTICAuthConfig from a JSON string
+elastic_auth_config_instance = ELASTICAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(elastic_auth_config_instance.to_json())
+
+# convert the object into a dict
+elastic_auth_config_dict = elastic_auth_config_instance.to_dict()
+# create an instance of ELASTICAuthConfig from a dict
+elastic_auth_config_from_dict = ELASTICAuthConfig.from_dict(elastic_auth_config_dict)
+```
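+
+For reference, a minimal construction sketch with placeholder connection details (assuming the constructor accepts the documented attribute names as keyword arguments):
+
+```python
+from vectorize_client.models.elastic_auth_config import ELASTICAuthConfig
+
+# all values below are placeholders
+elastic_auth = ELASTICAuthConfig(
+    name="My Elastic integration",
+    host="my-cluster.es.example.com",
+    port="9243",
+    api_key="YOUR_ELASTIC_API_KEY",
+)
+print(elastic_auth.to_json())
+```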
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/ELASTICConfig.md b/docs/ELASTICConfig.md
new file mode 100644
index 0000000..410d44d
--- /dev/null
+++ b/docs/ELASTICConfig.md
@@ -0,0 +1,30 @@
+# ELASTICConfig
+
+Configuration for Elasticsearch connector
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**index** | **str** | Index Name. Example: Enter index name |
+
+## Example
+
+```python
+from vectorize_client.models.elastic_config import ELASTICConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of ELASTICConfig from a JSON string
+elastic_config_instance = ELASTICConfig.from_json(json)
+# print the JSON string representation of the object
+print(elastic_config_instance.to_json())
+
+# convert the object into a dict
+elastic_config_dict = elastic_config_instance.to_dict()
+# create an instance of ELASTICConfig from a dict
+elastic_config_from_dict = ELASTICConfig.from_dict(elastic_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Elastic.md b/docs/Elastic.md
new file mode 100644
index 0000000..dc04310
--- /dev/null
+++ b/docs/Elastic.md
@@ -0,0 +1,31 @@
+# Elastic
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"ELASTIC\") |
+**config** | [**ELASTICConfig**](ELASTICConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.elastic import Elastic
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Elastic from a JSON string
+elastic_instance = Elastic.from_json(json)
+# print the JSON string representation of the object
+print(elastic_instance.to_json())
+
+# convert the object into a dict
+elastic_dict = elastic_instance.to_dict()
+# create an instance of Elastic from a dict
+elastic_from_dict = Elastic.from_dict(elastic_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Elastic1.md b/docs/Elastic1.md
new file mode 100644
index 0000000..515a0a0
--- /dev/null
+++ b/docs/Elastic1.md
@@ -0,0 +1,29 @@
+# Elastic1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**ELASTICConfig**](ELASTICConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.elastic1 import Elastic1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Elastic1 from a JSON string
+elastic1_instance = Elastic1.from_json(json)
+# print the JSON string representation of the object
+print(elastic1_instance.to_json())
+
+# convert the object into a dict
+elastic1_dict = elastic1_instance.to_dict()
+# create an instance of Elastic1 from a dict
+elastic1_from_dict = Elastic1.from_dict(elastic1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Elastic2.md b/docs/Elastic2.md
new file mode 100644
index 0000000..ef94cf1
--- /dev/null
+++ b/docs/Elastic2.md
@@ -0,0 +1,30 @@
+# Elastic2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"ELASTIC\") |
+
+## Example
+
+```python
+from vectorize_client.models.elastic2 import Elastic2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Elastic2 from a JSON string
+elastic2_instance = Elastic2.from_json(json)
+# print the JSON string representation of the object
+print(elastic2_instance.to_json())
+
+# convert the object into a dict
+elastic2_dict = elastic2_instance.to_dict()
+# create an instance of Elastic2 from a dict
+elastic2_from_dict = Elastic2.from_dict(elastic2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/ExtractionApi.md b/docs/ExtractionApi.md
new file mode 100644
index 0000000..6fea47f
--- /dev/null
+++ b/docs/ExtractionApi.md
@@ -0,0 +1,177 @@
+# vectorize_client.ExtractionApi
+
+All URIs are relative to *https://api.vectorize.io/v1*
+
+Method | HTTP request | Description
+------------- | ------------- | -------------
+[**get_extraction_result**](ExtractionApi.md#get_extraction_result) | **GET** /org/{organizationId}/extraction/{extractionId} | Get extraction result
+[**start_extraction**](ExtractionApi.md#start_extraction) | **POST** /org/{organizationId}/extraction | Start content extraction from a file
+
+
+# **get_extraction_result**
+> ExtractionResultResponse get_extraction_result(organization, extraction_id)
+
+Get extraction result
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+
+import vectorize_client
+from vectorize_client.models.extraction_result_response import ExtractionResultResponse
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.ExtractionApi(api_client)
+ organization = 'organization_example' # str |
+ extraction_id = 'extraction_id_example' # str |
+
+ try:
+ # Get extraction result
+ api_response = api_instance.get_extraction_result(organization, extraction_id)
+ print("The response of ExtractionApi->get_extraction_result:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling ExtractionApi->get_extraction_result: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization** | **str**| |
+ **extraction_id** | **str**| |
+
+### Return type
+
+[**ExtractionResultResponse**](ExtractionResultResponse.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: Not defined
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | Extraction result retrieved successfully | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **start_extraction**
+> StartExtractionResponse start_extraction(organization_id, start_extraction_request)
+
+Start content extraction from a file
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+
+import vectorize_client
+from vectorize_client.models.start_extraction_request import StartExtractionRequest
+from vectorize_client.models.start_extraction_response import StartExtractionResponse
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.ExtractionApi(api_client)
+ organization_id = 'organization_id_example' # str |
+ start_extraction_request = {"fileId":"2a53d7fa-748a-4b7f-a35b-e5f73944f444","type":"iris","chunkingStrategy":"markdown","chunkSize":20,"metadata":{"schemas":[],"inferSchema":true}} # StartExtractionRequest |
+
+ try:
+ # Start content extraction from a file
+ api_response = api_instance.start_extraction(organization_id, start_extraction_request)
+ print("The response of ExtractionApi->start_extraction:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling ExtractionApi->start_extraction: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization_id** | **str**| |
+ **start_extraction_request** | [**StartExtractionRequest**](StartExtractionRequest.md)| |
+
+### Return type
+
+[**StartExtractionResponse**](StartExtractionResponse.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: application/json
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | Extraction started successfully | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
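+Putting the two endpoints together, a client typically starts an extraction and then polls `get_extraction_result` until `ready` is true. A minimal sketch, assuming placeholder organization and file IDs and assuming the start response exposes the extraction ID as `extraction_id` (see StartExtractionResponse.md for the exact field name):
+
+```python
+import os
+import time
+
+import vectorize_client
+
+configuration = vectorize_client.Configuration(
+    host="https://api.vectorize.io/v1",
+    access_token=os.environ["BEARER_TOKEN"],
+)
+
+with vectorize_client.ApiClient(configuration) as api_client:
+    extraction_api = vectorize_client.ExtractionApi(api_client)
+    organization_id = "YOUR_ORGANIZATION_ID"  # placeholder
+
+    # start an extraction for an already-uploaded file (fileId is a placeholder)
+    start_response = extraction_api.start_extraction(
+        organization_id,
+        {"fileId": "YOUR_FILE_ID", "type": "iris", "chunkingStrategy": "markdown", "chunkSize": 20},
+    )
+
+    # poll until the extraction result is ready
+    while True:
+        result = extraction_api.get_extraction_result(organization_id, start_response.extraction_id)
+        if result.ready:
+            break
+        time.sleep(5)
+
+    print(result.data)
+```
+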
diff --git a/docs/ExtractionChunkingStrategy.md b/docs/ExtractionChunkingStrategy.md
new file mode 100644
index 0000000..7eace7c
--- /dev/null
+++ b/docs/ExtractionChunkingStrategy.md
@@ -0,0 +1,10 @@
+# ExtractionChunkingStrategy
+
+
+## Enum
+
+* `MARKDOWN` (value: `'markdown'`)
+
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/ExtractionResult.md b/docs/ExtractionResult.md
new file mode 100644
index 0000000..74fa48c
--- /dev/null
+++ b/docs/ExtractionResult.md
@@ -0,0 +1,36 @@
+# ExtractionResult
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**success** | **bool** | |
+**chunks** | **List[str]** | | [optional]
+**text** | **str** | | [optional]
+**metadata** | **str** | | [optional]
+**metadata_schema** | **str** | | [optional]
+**chunks_metadata** | **List[str]** | | [optional]
+**chunks_schema** | **List[str]** | | [optional]
+**error** | **str** | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.extraction_result import ExtractionResult
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of ExtractionResult from a JSON string
+extraction_result_instance = ExtractionResult.from_json(json)
+# print the JSON string representation of the object
+print(extraction_result_instance.to_json())
+
+# convert the object into a dict
+extraction_result_dict = extraction_result_instance.to_dict()
+# create an instance of ExtractionResult from a dict
+extraction_result_from_dict = ExtractionResult.from_dict(extraction_result_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/ExtractionResultResponse.md b/docs/ExtractionResultResponse.md
new file mode 100644
index 0000000..6fecd57
--- /dev/null
+++ b/docs/ExtractionResultResponse.md
@@ -0,0 +1,30 @@
+# ExtractionResultResponse
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**ready** | **bool** | |
+**data** | [**ExtractionResult**](ExtractionResult.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.extraction_result_response import ExtractionResultResponse
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of ExtractionResultResponse from a JSON string
+extraction_result_response_instance = ExtractionResultResponse.from_json(json)
+# print the JSON string representation of the object
+print(extraction_result_response_instance.to_json())
+
+# convert the object into a dict
+extraction_result_response_dict = extraction_result_response_instance.to_dict()
+# create an instance of ExtractionResultResponse from a dict
+extraction_result_response_from_dict = ExtractionResultResponse.from_dict(extraction_result_response_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/ExtractionType.md b/docs/ExtractionType.md
new file mode 100644
index 0000000..56b654e
--- /dev/null
+++ b/docs/ExtractionType.md
@@ -0,0 +1,10 @@
+# ExtractionType
+
+
+## Enum
+
+* `IRIS` (value: `'iris'`)
+
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/FILEUPLOADAuthConfig.md b/docs/FILEUPLOADAuthConfig.md
new file mode 100644
index 0000000..52b68db
--- /dev/null
+++ b/docs/FILEUPLOADAuthConfig.md
@@ -0,0 +1,32 @@
+# FILEUPLOADAuthConfig
+
+Authentication configuration for File Upload
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name for this connector |
+**path_prefix** | **str** | Path Prefix | [optional]
+**files** | **List[str]** | Choose files. Files uploaded to this connector can be used in pipelines to vectorize their contents. Note: files with the same name will be overwritten. | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.fileupload_auth_config import FILEUPLOADAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of FILEUPLOADAuthConfig from a JSON string
+fileupload_auth_config_instance = FILEUPLOADAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(FILEUPLOADAuthConfig.to_json())
+
+# convert the object into a dict
+fileupload_auth_config_dict = fileupload_auth_config_instance.to_dict()
+# create an instance of FILEUPLOADAuthConfig from a dict
+fileupload_auth_config_from_dict = FILEUPLOADAuthConfig.from_dict(fileupload_auth_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/FIRECRAWLAuthConfig.md b/docs/FIRECRAWLAuthConfig.md
new file mode 100644
index 0000000..1a680e6
--- /dev/null
+++ b/docs/FIRECRAWLAuthConfig.md
@@ -0,0 +1,31 @@
+# FIRECRAWLAuthConfig
+
+Authentication configuration for Firecrawl
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name |
+**api_key** | **str** | API Key. Example: Enter your Firecrawl API Key |
+
+## Example
+
+```python
+from vectorize_client.models.firecrawl_auth_config import FIRECRAWLAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of FIRECRAWLAuthConfig from a JSON string
+firecrawl_auth_config_instance = FIRECRAWLAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(FIRECRAWLAuthConfig.to_json())
+
+# convert the object into a dict
+firecrawl_auth_config_dict = firecrawl_auth_config_instance.to_dict()
+# create an instance of FIRECRAWLAuthConfig from a dict
+firecrawl_auth_config_from_dict = FIRECRAWLAuthConfig.from_dict(firecrawl_auth_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/FIRECRAWLConfig.md b/docs/FIRECRAWLConfig.md
new file mode 100644
index 0000000..aa4ad0a
--- /dev/null
+++ b/docs/FIRECRAWLConfig.md
@@ -0,0 +1,31 @@
+# FIRECRAWLConfig
+
+Configuration for Firecrawl connector
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**endpoint** | **str** | Endpoint. Example: Choose which api endpoint to use | [default to 'Crawl']
+**request** | **object** | Request Body. Example: JSON config for firecrawl's /crawl or /scrape endpoint. |
+
+## Example
+
+```python
+from vectorize_client.models.firecrawl_config import FIRECRAWLConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of FIRECRAWLConfig from a JSON string
+firecrawl_config_instance = FIRECRAWLConfig.from_json(json)
+# print the JSON string representation of the object
+print(firecrawl_config_instance.to_json())
+
+# convert the object into a dict
+firecrawl_config_dict = firecrawl_config_instance.to_dict()
+# create an instance of FIRECRAWLConfig from a dict
+firecrawl_config_from_dict = FIRECRAWLConfig.from_dict(firecrawl_config_dict)
+```
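+
+Because `request` is passed through to Firecrawl as a free-form object, the config pairs the endpoint choice with a request body. A minimal sketch with placeholder values; the crawl options shown are illustrative, not a verified Firecrawl request:
+
+```python
+from vectorize_client.models.firecrawl_config import FIRECRAWLConfig
+
+# endpoint selects which Firecrawl API the connector calls; request is forwarded as-is
+firecrawl_config = FIRECRAWLConfig(
+    endpoint="Crawl",
+    request={"url": "https://example.com", "limit": 10},  # illustrative crawl options
+)
+print(firecrawl_config.to_json())
+```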
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/FIREFLIESAuthConfig.md b/docs/FIREFLIESAuthConfig.md
new file mode 100644
index 0000000..775c8ef
--- /dev/null
+++ b/docs/FIREFLIESAuthConfig.md
@@ -0,0 +1,31 @@
+# FIREFLIESAuthConfig
+
+Authentication configuration for Fireflies.ai
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name |
+**api_key** | **str** | API Key. Example: Enter your Fireflies.ai API key |
+
+## Example
+
+```python
+from vectorize_client.models.fireflies_auth_config import FIREFLIESAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of FIREFLIESAuthConfig from a JSON string
+fireflies_auth_config_instance = FIREFLIESAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(fireflies_auth_config_instance.to_json())
+
+# convert the object into a dict
+fireflies_auth_config_dict = fireflies_auth_config_instance.to_dict()
+# create an instance of FIREFLIESAuthConfig from a dict
+fireflies_auth_config_from_dict = FIREFLIESAuthConfig.from_dict(fireflies_auth_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/FIREFLIESConfig.md b/docs/FIREFLIESConfig.md
new file mode 100644
index 0000000..b36e4ba
--- /dev/null
+++ b/docs/FIREFLIESConfig.md
@@ -0,0 +1,36 @@
+# FIREFLIESConfig
+
+Configuration for Fireflies.ai connector
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**start_date** | **date** | Start Date. Include meetings from this date forward. Example: 2023-12-31 |
+**end_date** | **date** | End Date. Include meetings up to this date only. Example: 2023-12-31 | [optional]
+**title_filter_type** | **str** | | [default to 'AND']
+**title_filter** | **str** | Title Filter. Only include meetings with this text in the title. Example: Enter meeting title | [optional]
+**participant_filter_type** | **str** | | [default to 'AND']
+**participant_filter** | **str** | Participant's Email Filter. Include meetings where these participants were invited. Example: Enter participant email | [optional]
+**max_meetings** | **float** | Max Meetings. Enter -1 for all available meetings, or specify a limit. Example: Enter maximum number of meetings to retrieve. (-1 for all) | [optional] [default to -1]
+
+## Example
+
+```python
+from vectorize_client.models.fireflies_config import FIREFLIESConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of FIREFLIESConfig from a JSON string
+fireflies_config_instance = FIREFLIESConfig.from_json(json)
+# print the JSON string representation of the object
+print(fireflies_config_instance.to_json())
+
+# convert the object into a dict
+fireflies_config_dict = fireflies_config_instance.to_dict()
+# create an instance of FIREFLIESConfig from a dict
+fireflies_config_from_dict = FIREFLIESConfig.from_dict(fireflies_config_dict)
+```
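+
+Because `start_date` and `end_date` are typed as `date`, direct construction takes `datetime.date` values rather than strings. The sketch below uses only a handful of the documented fields; `start_date` is the only one without a default, and the date and filter text are placeholders.
+
+```python
+from datetime import date
+
+from vectorize_client.models.fireflies_config import FIREFLIESConfig
+
+# Pull meetings from 2024 onward whose title mentions "standup";
+# max_meetings=-1 keeps the default "all available meetings" behaviour.
+fireflies_config = FIREFLIESConfig(
+    start_date=date(2024, 1, 1),
+    title_filter="standup",
+    max_meetings=-1,
+)
+print(fireflies_config.to_json())
+```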
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/FileUpload.md b/docs/FileUpload.md
new file mode 100644
index 0000000..184abf6
--- /dev/null
+++ b/docs/FileUpload.md
@@ -0,0 +1,31 @@
+# FileUpload
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"FILE_UPLOAD\") |
+**config** | [**FILEUPLOADAuthConfig**](FILEUPLOADAuthConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.file_upload import FileUpload
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of FileUpload from a JSON string
+file_upload_instance = FileUpload.from_json(json)
+# print the JSON string representation of the object
+print(file_upload_instance.to_json())
+
+# convert the object into a dict
+file_upload_dict = file_upload_instance.to_dict()
+# create an instance of FileUpload from a dict
+file_upload_from_dict = FileUpload.from_dict(file_upload_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/FileUpload1.md b/docs/FileUpload1.md
new file mode 100644
index 0000000..32d0063
--- /dev/null
+++ b/docs/FileUpload1.md
@@ -0,0 +1,29 @@
+# FileUpload1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**FILEUPLOADAuthConfig**](FILEUPLOADAuthConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.file_upload1 import FileUpload1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of FileUpload1 from a JSON string
+file_upload1_instance = FileUpload1.from_json(json)
+# print the JSON string representation of the object
+print(file_upload1_instance.to_json())
+
+# convert the object into a dict
+file_upload1_dict = file_upload1_instance.to_dict()
+# create an instance of FileUpload1 from a dict
+file_upload1_from_dict = FileUpload1.from_dict(file_upload1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/FileUpload2.md b/docs/FileUpload2.md
new file mode 100644
index 0000000..f3f53ec
--- /dev/null
+++ b/docs/FileUpload2.md
@@ -0,0 +1,30 @@
+# FileUpload2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"FILE_UPLOAD\") |
+
+## Example
+
+```python
+from vectorize_client.models.file_upload2 import FileUpload2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of FileUpload2 from a JSON string
+file_upload2_instance = FileUpload2.from_json(json)
+# print the JSON string representation of the object
+print(file_upload2_instance.to_json())
+
+# convert the object into a dict
+file_upload2_dict = file_upload2_instance.to_dict()
+# create an instance of FileUpload2 from a dict
+file_upload2_from_dict = FileUpload2.from_dict(file_upload2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/FilesApi.md b/docs/FilesApi.md
new file mode 100644
index 0000000..7b63d57
--- /dev/null
+++ b/docs/FilesApi.md
@@ -0,0 +1,93 @@
+# vectorize_client.FilesApi
+
+All URIs are relative to *https://api.vectorize.io/v1*
+
+Method | HTTP request | Description
+------------- | ------------- | -------------
+[**start_file_upload**](FilesApi.md#start_file_upload) | **POST** /org/{organizationId}/files | Upload a generic file to the platform
+
+
+# **start_file_upload**
+> StartFileUploadResponse start_file_upload(organization_id, start_file_upload_request)
+
+Upload a generic file to the platform
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+
+import vectorize_client
+from vectorize_client.models.start_file_upload_request import StartFileUploadRequest
+from vectorize_client.models.start_file_upload_response import StartFileUploadResponse
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.FilesApi(api_client)
+ organization_id = 'organization_id_example' # str |
+ start_file_upload_request = {"name":"My StartFileUploadRequest","contentType":"document"} # StartFileUploadRequest |
+
+ try:
+ # Upload a generic file to the platform
+ api_response = api_instance.start_file_upload(organization_id, start_file_upload_request)
+ print("The response of FilesApi->start_file_upload:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling FilesApi->start_file_upload: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization_id** | **str**| |
+ **start_file_upload_request** | [**StartFileUploadRequest**](StartFileUploadRequest.md)| |
+
+### Return type
+
+[**StartFileUploadResponse**](StartFileUploadResponse.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: application/json
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | File upload started successfully | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
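+### Uploading the file bytes (sketch)
+
+`start_file_upload` only registers the upload and returns a [StartFileUploadResponse](StartFileUploadResponse.md); the file contents still have to be sent in a second request. The snippet below is a hedged sketch that assumes the response exposes a presigned `upload_url` to PUT the bytes to; check the StartFileUploadResponse model for the actual attribute names before relying on it.
+
+```python
+import requests  # third-party HTTP client, assumed to be installed
+
+# `api_response` is the StartFileUploadResponse obtained above.
+# ASSUMPTION: it carries a presigned URL in `upload_url`.
+with open("report.pdf", "rb") as file_handle:
+    upload = requests.put(
+        api_response.upload_url,
+        data=file_handle,
+        headers={"Content-Type": "application/pdf"},
+    )
+upload.raise_for_status()
+```
+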
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
diff --git a/docs/Firecrawl.md b/docs/Firecrawl.md
new file mode 100644
index 0000000..4763f05
--- /dev/null
+++ b/docs/Firecrawl.md
@@ -0,0 +1,31 @@
+# Firecrawl
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"FIRECRAWL\") |
+**config** | [**FIRECRAWLConfig**](FIRECRAWLConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.firecrawl import Firecrawl
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Firecrawl from a JSON string
+firecrawl_instance = Firecrawl.from_json(json)
+# print the JSON string representation of the object
+print(firecrawl_instance.to_json())
+
+# convert the object into a dict
+firecrawl_dict = firecrawl_instance.to_dict()
+# create an instance of Firecrawl from a dict
+firecrawl_from_dict = Firecrawl.from_dict(firecrawl_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Firecrawl1.md b/docs/Firecrawl1.md
new file mode 100644
index 0000000..74267db
--- /dev/null
+++ b/docs/Firecrawl1.md
@@ -0,0 +1,29 @@
+# Firecrawl1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**FIRECRAWLConfig**](FIRECRAWLConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.firecrawl1 import Firecrawl1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Firecrawl1 from a JSON string
+firecrawl1_instance = Firecrawl1.from_json(json)
+# print the JSON string representation of the object
+print(firecrawl1_instance.to_json())
+
+# convert the object into a dict
+firecrawl1_dict = firecrawl1_instance.to_dict()
+# create an instance of Firecrawl1 from a dict
+firecrawl1_from_dict = Firecrawl1.from_dict(firecrawl1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Firecrawl2.md b/docs/Firecrawl2.md
new file mode 100644
index 0000000..4c199c3
--- /dev/null
+++ b/docs/Firecrawl2.md
@@ -0,0 +1,30 @@
+# Firecrawl2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"FIRECRAWL\") |
+
+## Example
+
+```python
+from vectorize_client.models.firecrawl2 import Firecrawl2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Firecrawl2 from a JSON string
+firecrawl2_instance = Firecrawl2.from_json(json)
+# print the JSON string representation of the object
+print(firecrawl2_instance.to_json())
+
+# convert the object into a dict
+firecrawl2_dict = firecrawl2_instance.to_dict()
+# create an instance of Firecrawl2 from a dict
+firecrawl2_from_dict = Firecrawl2.from_dict(firecrawl2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Fireflies.md b/docs/Fireflies.md
new file mode 100644
index 0000000..2b0ed57
--- /dev/null
+++ b/docs/Fireflies.md
@@ -0,0 +1,31 @@
+# Fireflies
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"FIREFLIES\") |
+**config** | [**FIREFLIESConfig**](FIREFLIESConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.fireflies import Fireflies
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Fireflies from a JSON string
+fireflies_instance = Fireflies.from_json(json)
+# print the JSON string representation of the object
+print(fireflies_instance.to_json())
+
+# convert the object into a dict
+fireflies_dict = fireflies_instance.to_dict()
+# create an instance of Fireflies from a dict
+fireflies_from_dict = Fireflies.from_dict(fireflies_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Fireflies1.md b/docs/Fireflies1.md
new file mode 100644
index 0000000..7e6107c
--- /dev/null
+++ b/docs/Fireflies1.md
@@ -0,0 +1,29 @@
+# Fireflies1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**FIREFLIESConfig**](FIREFLIESConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.fireflies1 import Fireflies1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Fireflies1 from a JSON string
+fireflies1_instance = Fireflies1.from_json(json)
+# print the JSON string representation of the object
+print(fireflies1_instance.to_json())
+
+# convert the object into a dict
+fireflies1_dict = fireflies1_instance.to_dict()
+# create an instance of Fireflies1 from a dict
+fireflies1_from_dict = Fireflies1.from_dict(fireflies1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Fireflies2.md b/docs/Fireflies2.md
new file mode 100644
index 0000000..0fe9bac
--- /dev/null
+++ b/docs/Fireflies2.md
@@ -0,0 +1,30 @@
+# Fireflies2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"FIREFLIES\") |
+
+## Example
+
+```python
+from vectorize_client.models.fireflies2 import Fireflies2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Fireflies2 from a JSON string
+fireflies2_instance = Fireflies2.from_json(json)
+# print the JSON string representation of the object
+print(fireflies2_instance.to_json())
+
+# convert the object into a dict
+fireflies2_dict = fireflies2_instance.to_dict()
+# create an instance of Fireflies2 from a dict
+fireflies2_from_dict = Fireflies2.from_dict(fireflies2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GCSAuthConfig.md b/docs/GCSAuthConfig.md
new file mode 100644
index 0000000..d3dedf2
--- /dev/null
+++ b/docs/GCSAuthConfig.md
@@ -0,0 +1,32 @@
+# GCSAuthConfig
+
+Authentication configuration for GCP Cloud Storage
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name |
+**service_account_json** | **str** | Service Account JSON. Example: Enter the JSON key file for the service account |
+**bucket_name** | **str** | Bucket. Example: Enter bucket name |
+
+## Example
+
+```python
+from vectorize_client.models.gcs_auth_config import GCSAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GCSAuthConfig from a JSON string
+gcs_auth_config_instance = GCSAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(gcs_auth_config_instance.to_json())
+
+# convert the object into a dict
+gcs_auth_config_dict = gcs_auth_config_instance.to_dict()
+# create an instance of GCSAuthConfig from a dict
+gcs_auth_config_from_dict = GCSAuthConfig.from_dict(gcs_auth_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GCSConfig.md b/docs/GCSConfig.md
new file mode 100644
index 0000000..77031f5
--- /dev/null
+++ b/docs/GCSConfig.md
@@ -0,0 +1,35 @@
+# GCSConfig
+
+Configuration for GCP Cloud Storage connector
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**file_extensions** | **List[str]** | File Extensions |
+**idle_time** | **float** | Check for updates every (seconds) | [default to 5]
+**recursive** | **bool** | Recursively scan all folders in the bucket | [optional]
+**path_prefix** | **str** | Path Prefix | [optional]
+**path_metadata_regex** | **str** | Path Metadata Regex | [optional]
+**path_regex_group_names** | **str** | Path Regex Group Names. Example: Enter Group Name | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.gcs_config import GCSConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GCSConfig from a JSON string
+gcs_config_instance = GCSConfig.from_json(json)
+# print the JSON string representation of the object
+print(gcs_config_instance.to_json())
+
+# convert the object into a dict
+gcs_config_dict = gcs_config_instance.to_dict()
+# create an instance of GCSConfig from a dict
+gcs_config_from_dict = GCSConfig.from_dict(gcs_config_dict)
+```
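+
+Only `file_extensions` has no default, so it is the only argument needed for direct construction; the other fields fall back to the defaults noted above. A minimal sketch with placeholder values:
+
+```python
+from vectorize_client.models.gcs_config import GCSConfig
+
+# Watch only PDF and Markdown objects under a given prefix,
+# polling every 60 seconds instead of the default 5.
+gcs_config = GCSConfig(
+    file_extensions=["pdf", "md"],
+    path_prefix="reports/",
+    idle_time=60,
+)
+print(gcs_config.to_json())
+```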
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GITHUBAuthConfig.md b/docs/GITHUBAuthConfig.md
new file mode 100644
index 0000000..dbb525f
--- /dev/null
+++ b/docs/GITHUBAuthConfig.md
@@ -0,0 +1,31 @@
+# GITHUBAuthConfig
+
+Authentication configuration for GitHub
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name |
+**oauth_token** | **str** | Personal Access Token. Example: Enter your GitHub personal access token |
+
+## Example
+
+```python
+from vectorize_client.models.github_auth_config import GITHUBAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GITHUBAuthConfig from a JSON string
+github_auth_config_instance = GITHUBAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(github_auth_config_instance.to_json())
+
+# convert the object into a dict
+github_auth_config_dict = github_auth_config_instance.to_dict()
+# create an instance of GITHUBAuthConfig from a dict
+github_auth_config_from_dict = GITHUBAuthConfig.from_dict(github_auth_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GITHUBConfig.md b/docs/GITHUBConfig.md
new file mode 100644
index 0000000..20ddaec
--- /dev/null
+++ b/docs/GITHUBConfig.md
@@ -0,0 +1,38 @@
+# GITHUBConfig
+
+Configuration for GitHub connector
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**repositories** | **str** | Repositories. Example: owner1/repo1 |
+**include_pull_requests** | **bool** | Include Pull Requests | [default to True]
+**pull_request_status** | **str** | Pull Request Status | [default to 'all']
+**pull_request_labels** | **str** | Pull Request Labels. Example: Optionally filter by label. E.g. fix | [optional]
+**include_issues** | **bool** | Include Issues | [default to True]
+**issue_status** | **str** | Issue Status | [default to 'all']
+**issue_labels** | **str** | Issue Labels. Example: Optionally filter by label. E.g. bug | [optional]
+**max_items** | **float** | Max Items. Example: Enter maximum number of items to fetch | [default to 1000]
+**created_after** | **date** | Created After. Filter for items created after this date. Example: 2012-12-31 | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.github_config import GITHUBConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GITHUBConfig from a JSON string
+github_config_instance = GITHUBConfig.from_json(json)
+# print the JSON string representation of the object
+print(github_config_instance.to_json())
+
+# convert the object into a dict
+github_config_dict = github_config_instance.to_dict()
+# create an instance of GITHUBConfig from a dict
+github_config_from_dict = GITHUBConfig.from_dict(github_config_dict)
+```
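+
+For direct construction, only `repositories` is required; the remaining fields fall back to the defaults listed above. A minimal sketch (the repository and label values are placeholders):
+
+```python
+from vectorize_client.models.github_config import GITHUBConfig
+
+# Index one repository, restricting issues to those labelled "bug".
+github_config = GITHUBConfig(
+    repositories="owner1/repo1",
+    issue_labels="bug",
+)
+print(github_config.to_json())
+```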
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GOOGLEDRIVEAuthConfig.md b/docs/GOOGLEDRIVEAuthConfig.md
new file mode 100644
index 0000000..aec10ce
--- /dev/null
+++ b/docs/GOOGLEDRIVEAuthConfig.md
@@ -0,0 +1,31 @@
+# GOOGLEDRIVEAuthConfig
+
+Authentication configuration for Google Drive (Service Account)
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name |
+**service_account_json** | **str** | Service Account JSON. Example: Enter the JSON key file for the service account |
+
+## Example
+
+```python
+from vectorize_client.models.googledrive_auth_config import GOOGLEDRIVEAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GOOGLEDRIVEAuthConfig from a JSON string
+googledrive_auth_config_instance = GOOGLEDRIVEAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(googledrive_auth_config_instance.to_json())
+
+# convert the object into a dict
+googledrive_auth_config_dict = googledrive_auth_config_instance.to_dict()
+# create an instance of GOOGLEDRIVEAuthConfig from a dict
+googledrive_auth_config_from_dict = GOOGLEDRIVEAuthConfig.from_dict(googledrive_auth_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GOOGLEDRIVEConfig.md b/docs/GOOGLEDRIVEConfig.md
new file mode 100644
index 0000000..1b1572c
--- /dev/null
+++ b/docs/GOOGLEDRIVEConfig.md
@@ -0,0 +1,32 @@
+# GOOGLEDRIVEConfig
+
+Configuration for Google Drive (Service Account) connector
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**file_extensions** | **List[str]** | File Extensions |
+**root_parents** | **str** | Restrict ingest to these folder URLs (optional). Example: https://drive.google.com/drive/folders/1234aBCd5678_eFgH9012iJKL3456opqr | [optional]
+**idle_time** | **float** | Polling Interval (seconds). Example: Enter polling interval in seconds | [optional] [default to 5]
+
+## Example
+
+```python
+from vectorize_client.models.googledrive_config import GOOGLEDRIVEConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GOOGLEDRIVEConfig from a JSON string
+googledrive_config_instance = GOOGLEDRIVEConfig.from_json(json)
+# print the JSON string representation of the object
+print(googledrive_config_instance.to_json())
+
+# convert the object into a dict
+googledrive_config_dict = googledrive_config_instance.to_dict()
+# create an instance of GOOGLEDRIVEConfig from a dict
+googledrive_config_from_dict = GOOGLEDRIVEConfig.from_dict(googledrive_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GOOGLEDRIVEOAUTHAuthConfig.md b/docs/GOOGLEDRIVEOAUTHAuthConfig.md
new file mode 100644
index 0000000..b6dec7a
--- /dev/null
+++ b/docs/GOOGLEDRIVEOAUTHAuthConfig.md
@@ -0,0 +1,34 @@
+# GOOGLEDRIVEOAUTHAuthConfig
+
+Authentication configuration for Google Drive OAuth
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name |
+**authorized_user** | **str** | Authorized User | [optional]
+**selection_details** | **str** | Connect Google Drive to Vectorize. Example: Authorize |
+**edited_users** | **str** | | [optional]
+**reconnect_users** | **str** | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.googledriveoauth_auth_config import GOOGLEDRIVEOAUTHAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GOOGLEDRIVEOAUTHAuthConfig from a JSON string
+googledriveoauth_auth_config_instance = GOOGLEDRIVEOAUTHAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(googledriveoauth_auth_config_instance.to_json())
+
+# convert the object into a dict
+googledriveoauth_auth_config_dict = googledriveoauth_auth_config_instance.to_dict()
+# create an instance of GOOGLEDRIVEOAUTHAuthConfig from a dict
+googledriveoauth_auth_config_from_dict = GOOGLEDRIVEOAUTHAuthConfig.from_dict(googledriveoauth_auth_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GOOGLEDRIVEOAUTHConfig.md b/docs/GOOGLEDRIVEOAUTHConfig.md
new file mode 100644
index 0000000..4c03f70
--- /dev/null
+++ b/docs/GOOGLEDRIVEOAUTHConfig.md
@@ -0,0 +1,31 @@
+# GOOGLEDRIVEOAUTHConfig
+
+Configuration for Google Drive OAuth connector
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**file_extensions** | **List[str]** | File Extensions |
+**idle_time** | **float** | Polling Interval (seconds). Example: Enter polling interval in seconds | [optional] [default to 5]
+
+## Example
+
+```python
+from vectorize_client.models.googledriveoauth_config import GOOGLEDRIVEOAUTHConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GOOGLEDRIVEOAUTHConfig from a JSON string
+googledriveoauth_config_instance = GOOGLEDRIVEOAUTHConfig.from_json(json)
+# print the JSON string representation of the object
+print(googledriveoauth_config_instance.to_json())
+
+# convert the object into a dict
+googledriveoauth_config_dict = googledriveoauth_config_instance.to_dict()
+# create an instance of GOOGLEDRIVEOAUTHConfig from a dict
+googledriveoauth_config_from_dict = GOOGLEDRIVEOAUTHConfig.from_dict(googledriveoauth_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GOOGLEDRIVEOAUTHMULTIAuthConfig.md b/docs/GOOGLEDRIVEOAUTHMULTIAuthConfig.md
new file mode 100644
index 0000000..b298062
--- /dev/null
+++ b/docs/GOOGLEDRIVEOAUTHMULTIAuthConfig.md
@@ -0,0 +1,33 @@
+# GOOGLEDRIVEOAUTHMULTIAuthConfig
+
+Authentication configuration for Google Drive Multi-User (Vectorize)
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name |
+**authorized_users** | **str** | Authorized Users | [optional]
+**edited_users** | **str** | | [optional]
+**deleted_users** | **str** | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.googledriveoauthmulti_auth_config import GOOGLEDRIVEOAUTHMULTIAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GOOGLEDRIVEOAUTHMULTIAuthConfig from a JSON string
+googledriveoauthmulti_auth_config_instance = GOOGLEDRIVEOAUTHMULTIAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(googledriveoauthmulti_auth_config_instance.to_json())
+
+# convert the object into a dict
+googledriveoauthmulti_auth_config_dict = googledriveoauthmulti_auth_config_instance.to_dict()
+# create an instance of GOOGLEDRIVEOAUTHMULTIAuthConfig from a dict
+googledriveoauthmulti_auth_config_from_dict = GOOGLEDRIVEOAUTHMULTIAuthConfig.from_dict(googledriveoauthmulti_auth_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GOOGLEDRIVEOAUTHMULTICUSTOMAuthConfig.md b/docs/GOOGLEDRIVEOAUTHMULTICUSTOMAuthConfig.md
new file mode 100644
index 0000000..d227eeb
--- /dev/null
+++ b/docs/GOOGLEDRIVEOAUTHMULTICUSTOMAuthConfig.md
@@ -0,0 +1,35 @@
+# GOOGLEDRIVEOAUTHMULTICUSTOMAuthConfig
+
+Authentication configuration for Google Drive Multi-User (White Label)
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name |
+**oauth2_client_id** | **str** | OAuth2 Client Id. Example: Enter Client Id |
+**oauth2_client_secret** | **str** | OAuth2 Client Secret. Example: Enter Client Secret |
+**authorized_users** | **str** | Authorized Users | [optional]
+**edited_users** | **str** | | [optional]
+**deleted_users** | **str** | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.googledriveoauthmulticustom_auth_config import GOOGLEDRIVEOAUTHMULTICUSTOMAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GOOGLEDRIVEOAUTHMULTICUSTOMAuthConfig from a JSON string
+googledriveoauthmulticustom_auth_config_instance = GOOGLEDRIVEOAUTHMULTICUSTOMAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(googledriveoauthmulticustom_auth_config_instance.to_json())
+
+# convert the object into a dict
+googledriveoauthmulticustom_auth_config_dict = googledriveoauthmulticustom_auth_config_instance.to_dict()
+# create an instance of GOOGLEDRIVEOAUTHMULTICUSTOMAuthConfig from a dict
+googledriveoauthmulticustom_auth_config_from_dict = GOOGLEDRIVEOAUTHMULTICUSTOMAuthConfig.from_dict(googledriveoauthmulticustom_auth_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GOOGLEDRIVEOAUTHMULTICUSTOMConfig.md b/docs/GOOGLEDRIVEOAUTHMULTICUSTOMConfig.md
new file mode 100644
index 0000000..e4155ec
--- /dev/null
+++ b/docs/GOOGLEDRIVEOAUTHMULTICUSTOMConfig.md
@@ -0,0 +1,31 @@
+# GOOGLEDRIVEOAUTHMULTICUSTOMConfig
+
+Configuration for Google Drive Multi-User (White Label) connector
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**file_extensions** | **List[str]** | File Extensions |
+**idle_time** | **float** | Polling Interval (seconds). Example: Enter polling interval in seconds | [optional] [default to 5]
+
+## Example
+
+```python
+from vectorize_client.models.googledriveoauthmulticustom_config import GOOGLEDRIVEOAUTHMULTICUSTOMConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GOOGLEDRIVEOAUTHMULTICUSTOMConfig from a JSON string
+googledriveoauthmulticustom_config_instance = GOOGLEDRIVEOAUTHMULTICUSTOMConfig.from_json(json)
+# print the JSON string representation of the object
+print(googledriveoauthmulticustom_config_instance.to_json())
+
+# convert the object into a dict
+googledriveoauthmulticustom_config_dict = googledriveoauthmulticustom_config_instance.to_dict()
+# create an instance of GOOGLEDRIVEOAUTHMULTICUSTOMConfig from a dict
+googledriveoauthmulticustom_config_from_dict = GOOGLEDRIVEOAUTHMULTICUSTOMConfig.from_dict(googledriveoauthmulticustom_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GOOGLEDRIVEOAUTHMULTIConfig.md b/docs/GOOGLEDRIVEOAUTHMULTIConfig.md
new file mode 100644
index 0000000..110628b
--- /dev/null
+++ b/docs/GOOGLEDRIVEOAUTHMULTIConfig.md
@@ -0,0 +1,31 @@
+# GOOGLEDRIVEOAUTHMULTIConfig
+
+Configuration for Google Drive Multi-User (Vectorize) connector
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**file_extensions** | **List[str]** | File Extensions |
+**idle_time** | **float** | Polling Interval (seconds). Example: Enter polling interval in seconds | [optional] [default to 5]
+
+## Example
+
+```python
+from vectorize_client.models.googledriveoauthmulti_config import GOOGLEDRIVEOAUTHMULTIConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GOOGLEDRIVEOAUTHMULTIConfig from a JSON string
+googledriveoauthmulti_config_instance = GOOGLEDRIVEOAUTHMULTIConfig.from_json(json)
+# print the JSON string representation of the object
+print(googledriveoauthmulti_config_instance.to_json())
+
+# convert the object into a dict
+googledriveoauthmulti_config_dict = googledriveoauthmulti_config_instance.to_dict()
+# create an instance of GOOGLEDRIVEOAUTHMULTIConfig from a dict
+googledriveoauthmulti_config_from_dict = GOOGLEDRIVEOAUTHMULTIConfig.from_dict(googledriveoauthmulti_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GetAIPlatformConnectors200Response.md b/docs/GetAIPlatformConnectors200Response.md
new file mode 100644
index 0000000..b4a923e
--- /dev/null
+++ b/docs/GetAIPlatformConnectors200Response.md
@@ -0,0 +1,29 @@
+# GetAIPlatformConnectors200Response
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**ai_platform_connectors** | [**List[AIPlatform]**](AIPlatform.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.get_ai_platform_connectors200_response import GetAIPlatformConnectors200Response
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GetAIPlatformConnectors200Response from a JSON string
+get_ai_platform_connectors200_response_instance = GetAIPlatformConnectors200Response.from_json(json)
+# print the JSON string representation of the object
+print(get_ai_platform_connectors200_response_instance.to_json())
+
+# convert the object into a dict
+get_ai_platform_connectors200_response_dict = get_ai_platform_connectors200_response_instance.to_dict()
+# create an instance of GetAIPlatformConnectors200Response from a dict
+get_ai_platform_connectors200_response_from_dict = GetAIPlatformConnectors200Response.from_dict(get_ai_platform_connectors200_response_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GetDeepResearchResponse.md b/docs/GetDeepResearchResponse.md
new file mode 100644
index 0000000..6d22f6e
--- /dev/null
+++ b/docs/GetDeepResearchResponse.md
@@ -0,0 +1,30 @@
+# GetDeepResearchResponse
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**ready** | **bool** | |
+**data** | [**DeepResearchResult**](DeepResearchResult.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.get_deep_research_response import GetDeepResearchResponse
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GetDeepResearchResponse from a JSON string
+get_deep_research_response_instance = GetDeepResearchResponse.from_json(json)
+# print the JSON string representation of the object
+print(get_deep_research_response_instance.to_json())
+
+# convert the object into a dict
+get_deep_research_response_dict = get_deep_research_response_instance.to_dict()
+# create an instance of GetDeepResearchResponse from a dict
+get_deep_research_response_from_dict = GetDeepResearchResponse.from_dict(get_deep_research_response_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GetDestinationConnectors200Response.md b/docs/GetDestinationConnectors200Response.md
new file mode 100644
index 0000000..a02bf9f
--- /dev/null
+++ b/docs/GetDestinationConnectors200Response.md
@@ -0,0 +1,29 @@
+# GetDestinationConnectors200Response
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**destination_connectors** | [**List[DestinationConnector]**](DestinationConnector.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.get_destination_connectors200_response import GetDestinationConnectors200Response
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GetDestinationConnectors200Response from a JSON string
+get_destination_connectors200_response_instance = GetDestinationConnectors200Response.from_json(json)
+# print the JSON string representation of the object
+print(get_destination_connectors200_response_instance.to_json())
+
+# convert the object into a dict
+get_destination_connectors200_response_dict = get_destination_connectors200_response_instance.to_dict()
+# create an instance of GetDestinationConnectors200Response from a dict
+get_destination_connectors200_response_from_dict = GetDestinationConnectors200Response.from_dict(get_destination_connectors200_response_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GetPipelineEventsResponse.md b/docs/GetPipelineEventsResponse.md
new file mode 100644
index 0000000..f738951
--- /dev/null
+++ b/docs/GetPipelineEventsResponse.md
@@ -0,0 +1,31 @@
+# GetPipelineEventsResponse
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**message** | **str** | |
+**next_token** | **str** | | [optional]
+**data** | [**List[PipelineEvents]**](PipelineEvents.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.get_pipeline_events_response import GetPipelineEventsResponse
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GetPipelineEventsResponse from a JSON string
+get_pipeline_events_response_instance = GetPipelineEventsResponse.from_json(json)
+# print the JSON string representation of the object
+print(get_pipeline_events_response_instance.to_json())
+
+# convert the object into a dict
+get_pipeline_events_response_dict = get_pipeline_events_response_instance.to_dict()
+# create an instance of GetPipelineEventsResponse from a dict
+get_pipeline_events_response_from_dict = GetPipelineEventsResponse.from_dict(get_pipeline_events_response_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GetPipelineMetricsResponse.md b/docs/GetPipelineMetricsResponse.md
new file mode 100644
index 0000000..663dd4a
--- /dev/null
+++ b/docs/GetPipelineMetricsResponse.md
@@ -0,0 +1,30 @@
+# GetPipelineMetricsResponse
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**message** | **str** | |
+**data** | [**List[PipelineMetrics]**](PipelineMetrics.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.get_pipeline_metrics_response import GetPipelineMetricsResponse
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GetPipelineMetricsResponse from a JSON string
+get_pipeline_metrics_response_instance = GetPipelineMetricsResponse.from_json(json)
+# print the JSON string representation of the object
+print(get_pipeline_metrics_response_instance.to_json())
+
+# convert the object into a dict
+get_pipeline_metrics_response_dict = get_pipeline_metrics_response_instance.to_dict()
+# create an instance of GetPipelineMetricsResponse from a dict
+get_pipeline_metrics_response_from_dict = GetPipelineMetricsResponse.from_dict(get_pipeline_metrics_response_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GetPipelineResponse.md b/docs/GetPipelineResponse.md
new file mode 100644
index 0000000..f7bdbaa
--- /dev/null
+++ b/docs/GetPipelineResponse.md
@@ -0,0 +1,30 @@
+# GetPipelineResponse
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**message** | **str** | |
+**data** | [**PipelineSummary**](PipelineSummary.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.get_pipeline_response import GetPipelineResponse
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GetPipelineResponse from a JSON string
+get_pipeline_response_instance = GetPipelineResponse.from_json(json)
+# print the JSON string representation of the object
+print(get_pipeline_response_instance.to_json())
+
+# convert the object into a dict
+get_pipeline_response_dict = get_pipeline_response_instance.to_dict()
+# create an instance of GetPipelineResponse from a dict
+get_pipeline_response_from_dict = GetPipelineResponse.from_dict(get_pipeline_response_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GetPipelines400Response.md b/docs/GetPipelines400Response.md
new file mode 100644
index 0000000..a6bf197
--- /dev/null
+++ b/docs/GetPipelines400Response.md
@@ -0,0 +1,32 @@
+# GetPipelines400Response
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**error** | **str** | |
+**details** | **str** | | [optional]
+**failed_updates** | **List[str]** | | [optional]
+**successful_updates** | **List[str]** | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.get_pipelines400_response import GetPipelines400Response
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GetPipelines400Response from a JSON string
+get_pipelines400_response_instance = GetPipelines400Response.from_json(json)
+# print the JSON string representation of the object
+print(get_pipelines400_response_instance.to_json())
+
+# convert the object into a dict
+get_pipelines400_response_dict = get_pipelines400_response_instance.to_dict()
+# create an instance of GetPipelines400Response from a dict
+get_pipelines400_response_from_dict = GetPipelines400Response.from_dict(get_pipelines400_response_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GetPipelinesResponse.md b/docs/GetPipelinesResponse.md
new file mode 100644
index 0000000..190abaf
--- /dev/null
+++ b/docs/GetPipelinesResponse.md
@@ -0,0 +1,30 @@
+# GetPipelinesResponse
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**message** | **str** | |
+**data** | [**List[PipelineListSummary]**](PipelineListSummary.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.get_pipelines_response import GetPipelinesResponse
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GetPipelinesResponse from a JSON string
+get_pipelines_response_instance = GetPipelinesResponse.from_json(json)
+# print the JSON string representation of the object
+print(get_pipelines_response_instance.to_json())
+
+# convert the object into a dict
+get_pipelines_response_dict = get_pipelines_response_instance.to_dict()
+# create an instance of GetPipelinesResponse from a dict
+get_pipelines_response_from_dict = GetPipelinesResponse.from_dict(get_pipelines_response_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GetSourceConnectors200Response.md b/docs/GetSourceConnectors200Response.md
new file mode 100644
index 0000000..485f7f2
--- /dev/null
+++ b/docs/GetSourceConnectors200Response.md
@@ -0,0 +1,29 @@
+# GetSourceConnectors200Response
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**source_connectors** | [**List[SourceConnector]**](SourceConnector.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.get_source_connectors200_response import GetSourceConnectors200Response
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GetSourceConnectors200Response from a JSON string
+get_source_connectors200_response_instance = GetSourceConnectors200Response.from_json(json)
+# print the JSON string representation of the object
+print(get_source_connectors200_response_instance.to_json())
+
+# convert the object into a dict
+get_source_connectors200_response_dict = get_source_connectors200_response_instance.to_dict()
+# create an instance of GetSourceConnectors200Response from a dict
+get_source_connectors200_response_from_dict = GetSourceConnectors200Response.from_dict(get_source_connectors200_response_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GetUploadFilesResponse.md b/docs/GetUploadFilesResponse.md
new file mode 100644
index 0000000..849dcab
--- /dev/null
+++ b/docs/GetUploadFilesResponse.md
@@ -0,0 +1,30 @@
+# GetUploadFilesResponse
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**message** | **str** | |
+**files** | [**List[UploadFile]**](UploadFile.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.get_upload_files_response import GetUploadFilesResponse
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GetUploadFilesResponse from a JSON string
+get_upload_files_response_instance = GetUploadFilesResponse.from_json(json)
+# print the JSON string representation of the object
+print(get_upload_files_response_instance.to_json())
+
+# convert the object into a dict
+get_upload_files_response_dict = get_upload_files_response_instance.to_dict()
+# create an instance of GetUploadFilesResponse from a dict
+get_upload_files_response_from_dict = GetUploadFilesResponse.from_dict(get_upload_files_response_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Github.md b/docs/Github.md
new file mode 100644
index 0000000..fe77e2c
--- /dev/null
+++ b/docs/Github.md
@@ -0,0 +1,31 @@
+# Github
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"GITHUB\") |
+**config** | [**GITHUBConfig**](GITHUBConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.github import Github
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Github from a JSON string
+github_instance = Github.from_json(json)
+# print the JSON string representation of the object
+print(github_instance.to_json())
+
+# convert the object into a dict
+github_dict = github_instance.to_dict()
+# create an instance of Github from a dict
+github_from_dict = Github.from_dict(github_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Github1.md b/docs/Github1.md
new file mode 100644
index 0000000..4ea1dde
--- /dev/null
+++ b/docs/Github1.md
@@ -0,0 +1,29 @@
+# Github1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**GITHUBConfig**](GITHUBConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.github1 import Github1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Github1 from a JSON string
+github1_instance = Github1.from_json(json)
+# print the JSON string representation of the object
+print(github1_instance.to_json())
+
+# convert the object into a dict
+github1_dict = github1_instance.to_dict()
+# create an instance of Github1 from a dict
+github1_from_dict = Github1.from_dict(github1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Github2.md b/docs/Github2.md
new file mode 100644
index 0000000..2aa0363
--- /dev/null
+++ b/docs/Github2.md
@@ -0,0 +1,30 @@
+# Github2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"GITHUB\") |
+
+## Example
+
+```python
+from vectorize_client.models.github2 import Github2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Github2 from a JSON string
+github2_instance = Github2.from_json(json)
+# print the JSON string representation of the object
+print(github2_instance.to_json())
+
+# convert the object into a dict
+github2_dict = github2_instance.to_dict()
+# create an instance of Github2 from a dict
+github2_from_dict = Github2.from_dict(github2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GoogleCloudStorage.md b/docs/GoogleCloudStorage.md
new file mode 100644
index 0000000..b805f91
--- /dev/null
+++ b/docs/GoogleCloudStorage.md
@@ -0,0 +1,31 @@
+# GoogleCloudStorage
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"GCS\") |
+**config** | [**GCSConfig**](GCSConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.google_cloud_storage import GoogleCloudStorage
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GoogleCloudStorage from a JSON string
+google_cloud_storage_instance = GoogleCloudStorage.from_json(json)
+# print the JSON string representation of the object
+print(google_cloud_storage_instance.to_json())
+
+# convert the object into a dict
+google_cloud_storage_dict = google_cloud_storage_instance.to_dict()
+# create an instance of GoogleCloudStorage from a dict
+google_cloud_storage_from_dict = GoogleCloudStorage.from_dict(google_cloud_storage_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GoogleCloudStorage1.md b/docs/GoogleCloudStorage1.md
new file mode 100644
index 0000000..927c4d7
--- /dev/null
+++ b/docs/GoogleCloudStorage1.md
@@ -0,0 +1,29 @@
+# GoogleCloudStorage1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**GCSConfig**](GCSConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.google_cloud_storage1 import GoogleCloudStorage1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GoogleCloudStorage1 from a JSON string
+google_cloud_storage1_instance = GoogleCloudStorage1.from_json(json)
+# print the JSON string representation of the object
+print(google_cloud_storage1_instance.to_json())
+
+# convert the object into a dict
+google_cloud_storage1_dict = google_cloud_storage1_instance.to_dict()
+# create an instance of GoogleCloudStorage1 from a dict
+google_cloud_storage1_from_dict = GoogleCloudStorage1.from_dict(google_cloud_storage1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GoogleCloudStorage2.md b/docs/GoogleCloudStorage2.md
new file mode 100644
index 0000000..4d03bc7
--- /dev/null
+++ b/docs/GoogleCloudStorage2.md
@@ -0,0 +1,30 @@
+# GoogleCloudStorage2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"GCS\") |
+
+## Example
+
+```python
+from vectorize_client.models.google_cloud_storage2 import GoogleCloudStorage2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GoogleCloudStorage2 from a JSON string
+google_cloud_storage2_instance = GoogleCloudStorage2.from_json(json)
+# print the JSON string representation of the object
+print(google_cloud_storage2_instance.to_json())
+
+# convert the object into a dict
+google_cloud_storage2_dict = google_cloud_storage2_instance.to_dict()
+# create an instance of GoogleCloudStorage2 from a dict
+google_cloud_storage2_from_dict = GoogleCloudStorage2.from_dict(google_cloud_storage2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GoogleDrive.md b/docs/GoogleDrive.md
new file mode 100644
index 0000000..f5b71c1
--- /dev/null
+++ b/docs/GoogleDrive.md
@@ -0,0 +1,31 @@
+# GoogleDrive
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"GOOGLE_DRIVE\") |
+**config** | [**GOOGLEDRIVEConfig**](GOOGLEDRIVEConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.google_drive import GoogleDrive
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GoogleDrive from a JSON string
+google_drive_instance = GoogleDrive.from_json(json)
+# print the JSON string representation of the object
+print(google_drive_instance.to_json())
+
+# convert the object into a dict
+google_drive_dict = google_drive_instance.to_dict()
+# create an instance of GoogleDrive from a dict
+google_drive_from_dict = GoogleDrive.from_dict(google_drive_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GoogleDrive1.md b/docs/GoogleDrive1.md
new file mode 100644
index 0000000..d615b52
--- /dev/null
+++ b/docs/GoogleDrive1.md
@@ -0,0 +1,29 @@
+# GoogleDrive1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**GOOGLEDRIVEConfig**](GOOGLEDRIVEConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.google_drive1 import GoogleDrive1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GoogleDrive1 from a JSON string
+google_drive1_instance = GoogleDrive1.from_json(json)
+# print the JSON string representation of the object
+print(google_drive1_instance.to_json())
+
+# convert the object into a dict
+google_drive1_dict = google_drive1_instance.to_dict()
+# create an instance of GoogleDrive1 from a dict
+google_drive1_from_dict = GoogleDrive1.from_dict(google_drive1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GoogleDrive2.md b/docs/GoogleDrive2.md
new file mode 100644
index 0000000..be315cc
--- /dev/null
+++ b/docs/GoogleDrive2.md
@@ -0,0 +1,30 @@
+# GoogleDrive2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"GOOGLE_DRIVE\") |
+
+## Example
+
+```python
+from vectorize_client.models.google_drive2 import GoogleDrive2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GoogleDrive2 from a JSON string
+google_drive2_instance = GoogleDrive2.from_json(json)
+# print the JSON string representation of the object
+print(google_drive2_instance.to_json())
+
+# convert the object into a dict
+google_drive2_dict = google_drive2_instance.to_dict()
+# create an instance of GoogleDrive2 from a dict
+google_drive2_from_dict = GoogleDrive2.from_dict(google_drive2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GoogleDriveOAuth.md b/docs/GoogleDriveOAuth.md
new file mode 100644
index 0000000..1a0145f
--- /dev/null
+++ b/docs/GoogleDriveOAuth.md
@@ -0,0 +1,31 @@
+# GoogleDriveOAuth
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"GOOGLE_DRIVE_OAUTH\") |
+**config** | [**GOOGLEDRIVEOAUTHConfig**](GOOGLEDRIVEOAUTHConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.google_drive_o_auth import GoogleDriveOAuth
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GoogleDriveOAuth from a JSON string
+google_drive_o_auth_instance = GoogleDriveOAuth.from_json(json)
+# print the JSON string representation of the object
+print(google_drive_o_auth_instance.to_json())
+
+# convert the object into a dict
+google_drive_o_auth_dict = google_drive_o_auth_instance.to_dict()
+# create an instance of GoogleDriveOAuth from a dict
+google_drive_o_auth_from_dict = GoogleDriveOAuth.from_dict(google_drive_o_auth_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GoogleDriveOAuth1.md b/docs/GoogleDriveOAuth1.md
new file mode 100644
index 0000000..1ffd744
--- /dev/null
+++ b/docs/GoogleDriveOAuth1.md
@@ -0,0 +1,29 @@
+# GoogleDriveOAuth1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**GOOGLEDRIVEOAUTHConfig**](GOOGLEDRIVEOAUTHConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.google_drive_o_auth1 import GoogleDriveOAuth1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GoogleDriveOAuth1 from a JSON string
+google_drive_o_auth1_instance = GoogleDriveOAuth1.from_json(json)
+# print the JSON string representation of the object
+print(google_drive_o_auth1_instance.to_json())
+
+# convert the object into a dict
+google_drive_o_auth1_dict = google_drive_o_auth1_instance.to_dict()
+# create an instance of GoogleDriveOAuth1 from a dict
+google_drive_o_auth1_from_dict = GoogleDriveOAuth1.from_dict(google_drive_o_auth1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GoogleDriveOAuth2.md b/docs/GoogleDriveOAuth2.md
new file mode 100644
index 0000000..d93e0e9
--- /dev/null
+++ b/docs/GoogleDriveOAuth2.md
@@ -0,0 +1,30 @@
+# GoogleDriveOAuth2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"GOOGLE_DRIVE_OAUTH\") |
+
+## Example
+
+```python
+from vectorize_client.models.google_drive_o_auth2 import GoogleDriveOAuth2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GoogleDriveOAuth2 from a JSON string
+google_drive_o_auth2_instance = GoogleDriveOAuth2.from_json(json)
+# print the JSON string representation of the object
+print(google_drive_o_auth2_instance.to_json())
+
+# convert the object into a dict
+google_drive_o_auth2_dict = google_drive_o_auth2_instance.to_dict()
+# create an instance of GoogleDriveOAuth2 from a dict
+google_drive_o_auth2_from_dict = GoogleDriveOAuth2.from_dict(google_drive_o_auth2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GoogleDriveOauthMulti.md b/docs/GoogleDriveOauthMulti.md
new file mode 100644
index 0000000..03374b3
--- /dev/null
+++ b/docs/GoogleDriveOauthMulti.md
@@ -0,0 +1,31 @@
+# GoogleDriveOauthMulti
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"GOOGLE_DRIVE_OAUTH_MULTI\") |
+**config** | [**GOOGLEDRIVEOAUTHMULTIConfig**](GOOGLEDRIVEOAUTHMULTIConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.google_drive_oauth_multi import GoogleDriveOauthMulti
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GoogleDriveOauthMulti from a JSON string
+google_drive_oauth_multi_instance = GoogleDriveOauthMulti.from_json(json)
+# print the JSON string representation of the object
+print(google_drive_oauth_multi_instance.to_json())
+
+# convert the object into a dict
+google_drive_oauth_multi_dict = google_drive_oauth_multi_instance.to_dict()
+# create an instance of GoogleDriveOauthMulti from a dict
+google_drive_oauth_multi_from_dict = GoogleDriveOauthMulti.from_dict(google_drive_oauth_multi_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GoogleDriveOauthMulti1.md b/docs/GoogleDriveOauthMulti1.md
new file mode 100644
index 0000000..371b3db
--- /dev/null
+++ b/docs/GoogleDriveOauthMulti1.md
@@ -0,0 +1,29 @@
+# GoogleDriveOauthMulti1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**GOOGLEDRIVEOAUTHMULTIConfig**](GOOGLEDRIVEOAUTHMULTIConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.google_drive_oauth_multi1 import GoogleDriveOauthMulti1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GoogleDriveOauthMulti1 from a JSON string
+google_drive_oauth_multi1_instance = GoogleDriveOauthMulti1.from_json(json)
+# print the JSON string representation of the object
+print(google_drive_oauth_multi1_instance.to_json())
+
+# convert the object into a dict
+google_drive_oauth_multi1_dict = google_drive_oauth_multi1_instance.to_dict()
+# create an instance of GoogleDriveOauthMulti1 from a dict
+google_drive_oauth_multi1_from_dict = GoogleDriveOauthMulti1.from_dict(google_drive_oauth_multi1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GoogleDriveOauthMulti2.md b/docs/GoogleDriveOauthMulti2.md
new file mode 100644
index 0000000..927bd94
--- /dev/null
+++ b/docs/GoogleDriveOauthMulti2.md
@@ -0,0 +1,30 @@
+# GoogleDriveOauthMulti2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"GOOGLE_DRIVE_OAUTH_MULTI\") |
+
+## Example
+
+```python
+from vectorize_client.models.google_drive_oauth_multi2 import GoogleDriveOauthMulti2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GoogleDriveOauthMulti2 from a JSON string
+google_drive_oauth_multi2_instance = GoogleDriveOauthMulti2.from_json(json)
+# print the JSON string representation of the object
+print(google_drive_oauth_multi2_instance.to_json())
+
+# convert the object into a dict
+google_drive_oauth_multi2_dict = google_drive_oauth_multi2_instance.to_dict()
+# create an instance of GoogleDriveOauthMulti2 from a dict
+google_drive_oauth_multi2_from_dict = GoogleDriveOauthMulti2.from_dict(google_drive_oauth_multi2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GoogleDriveOauthMultiCustom.md b/docs/GoogleDriveOauthMultiCustom.md
new file mode 100644
index 0000000..138ae3d
--- /dev/null
+++ b/docs/GoogleDriveOauthMultiCustom.md
@@ -0,0 +1,31 @@
+# GoogleDriveOauthMultiCustom
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"GOOGLE_DRIVE_OAUTH_MULTI_CUSTOM\") |
+**config** | [**GOOGLEDRIVEOAUTHMULTICUSTOMConfig**](GOOGLEDRIVEOAUTHMULTICUSTOMConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.google_drive_oauth_multi_custom import GoogleDriveOauthMultiCustom
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GoogleDriveOauthMultiCustom from a JSON string
+google_drive_oauth_multi_custom_instance = GoogleDriveOauthMultiCustom.from_json(json)
+# print the JSON string representation of the object
+print(google_drive_oauth_multi_custom_instance.to_json())
+
+# convert the object into a dict
+google_drive_oauth_multi_custom_dict = google_drive_oauth_multi_custom_instance.to_dict()
+# create an instance of GoogleDriveOauthMultiCustom from a dict
+google_drive_oauth_multi_custom_from_dict = GoogleDriveOauthMultiCustom.from_dict(google_drive_oauth_multi_custom_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GoogleDriveOauthMultiCustom1.md b/docs/GoogleDriveOauthMultiCustom1.md
new file mode 100644
index 0000000..e14a1b0
--- /dev/null
+++ b/docs/GoogleDriveOauthMultiCustom1.md
@@ -0,0 +1,29 @@
+# GoogleDriveOauthMultiCustom1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**GOOGLEDRIVEOAUTHMULTICUSTOMConfig**](GOOGLEDRIVEOAUTHMULTICUSTOMConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.google_drive_oauth_multi_custom1 import GoogleDriveOauthMultiCustom1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GoogleDriveOauthMultiCustom1 from a JSON string
+google_drive_oauth_multi_custom1_instance = GoogleDriveOauthMultiCustom1.from_json(json)
+# print the JSON string representation of the object
+print(google_drive_oauth_multi_custom1_instance.to_json())
+
+# convert the object into a dict
+google_drive_oauth_multi_custom1_dict = google_drive_oauth_multi_custom1_instance.to_dict()
+# create an instance of GoogleDriveOauthMultiCustom1 from a dict
+google_drive_oauth_multi_custom1_from_dict = GoogleDriveOauthMultiCustom1.from_dict(google_drive_oauth_multi_custom1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/GoogleDriveOauthMultiCustom2.md b/docs/GoogleDriveOauthMultiCustom2.md
new file mode 100644
index 0000000..8d20d4f
--- /dev/null
+++ b/docs/GoogleDriveOauthMultiCustom2.md
@@ -0,0 +1,30 @@
+# GoogleDriveOauthMultiCustom2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"GOOGLE_DRIVE_OAUTH_MULTI_CUSTOM\") |
+
+## Example
+
+```python
+from vectorize_client.models.google_drive_oauth_multi_custom2 import GoogleDriveOauthMultiCustom2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of GoogleDriveOauthMultiCustom2 from a JSON string
+google_drive_oauth_multi_custom2_instance = GoogleDriveOauthMultiCustom2.from_json(json)
+# print the JSON string representation of the object
+print(google_drive_oauth_multi_custom2_instance.to_json())
+
+# convert the object into a dict
+google_drive_oauth_multi_custom2_dict = google_drive_oauth_multi_custom2_instance.to_dict()
+# create an instance of GoogleDriveOauthMultiCustom2 from a dict
+google_drive_oauth_multi_custom2_from_dict = GoogleDriveOauthMultiCustom2.from_dict(google_drive_oauth_multi_custom2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/INTERCOMAuthConfig.md b/docs/INTERCOMAuthConfig.md
new file mode 100644
index 0000000..d1292b1
--- /dev/null
+++ b/docs/INTERCOMAuthConfig.md
@@ -0,0 +1,31 @@
+# INTERCOMAuthConfig
+
+Authentication configuration for Intercom
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name |
+**token** | **str** | Access Token. Example: Authorize Intercom Access |
+
+## Example
+
+```python
+from vectorize_client.models.intercom_auth_config import INTERCOMAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of INTERCOMAuthConfig from a JSON string
+intercom_auth_config_instance = INTERCOMAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(intercom_auth_config_instance.to_json())
+
+# convert the object into a dict
+intercom_auth_config_dict = intercom_auth_config_instance.to_dict()
+# create an instance of INTERCOMAuthConfig from a dict
+intercom_auth_config_from_dict = INTERCOMAuthConfig.from_dict(intercom_auth_config_dict)
+```
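+
+As a complement to the round-trip example above, the auth config can also be built directly from keyword arguments. This is a minimal sketch assuming the generated Pydantic model accepts the documented property names as keywords; both values are placeholders, not real credentials:
+
+```python
+from vectorize_client.models.intercom_auth_config import INTERCOMAuthConfig
+
+# Construct the auth config directly; both values below are placeholders.
+intercom_auth = INTERCOMAuthConfig(
+    name="My Intercom authorization",
+    token="intercom-access-token-placeholder",
+)
+print(intercom_auth.to_json())
+```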
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/INTERCOMConfig.md b/docs/INTERCOMConfig.md
new file mode 100644
index 0000000..9744de9
--- /dev/null
+++ b/docs/INTERCOMConfig.md
@@ -0,0 +1,32 @@
+# INTERCOMConfig
+
+Configuration for Intercom connector
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**created_at** | **date** | Created After. Filter for conversations created after this date. Example: 2012-12-31 |
+**updated_at** | **date** | Updated After. Filter for conversations updated after this date. Example: 2012-12-31 | [optional]
+**state** | **List[str]** | State | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.intercom_config import INTERCOMConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of INTERCOMConfig from a JSON string
+intercom_config_instance = INTERCOMConfig.from_json(json)
+# print the JSON string representation of the object
+print(intercom_config_instance.to_json())
+
+# convert the object into a dict
+intercom_config_dict = intercom_config_instance.to_dict()
+# create an instance of INTERCOMConfig from a dict
+intercom_config_from_dict = INTERCOMConfig.from_dict(intercom_config_dict)
+```
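+
+The date fields in this model are `date` values rather than strings. A minimal constructor sketch, assuming keyword construction works as documented; the date and the `state` values are illustrative placeholders:
+
+```python
+from datetime import date
+
+from vectorize_client.models.intercom_config import INTERCOMConfig
+
+# Filter conversations created after 2024-01-01; the state values are illustrative.
+intercom_config = INTERCOMConfig(
+    created_at=date(2024, 1, 1),
+    state=["open", "closed"],
+)
+print(intercom_config.to_json())
+```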
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Intercom.md b/docs/Intercom.md
new file mode 100644
index 0000000..89a4732
--- /dev/null
+++ b/docs/Intercom.md
@@ -0,0 +1,31 @@
+# Intercom
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"INTERCOM\") |
+**config** | [**INTERCOMConfig**](INTERCOMConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.intercom import Intercom
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Intercom from a JSON string
+intercom_instance = Intercom.from_json(json)
+# print the JSON string representation of the object
+print(intercom_instance.to_json())
+
+# convert the object into a dict
+intercom_dict = intercom_instance.to_dict()
+# create an instance of Intercom from a dict
+intercom_from_dict = Intercom.from_dict(intercom_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Intercom1.md b/docs/Intercom1.md
new file mode 100644
index 0000000..0f97340
--- /dev/null
+++ b/docs/Intercom1.md
@@ -0,0 +1,29 @@
+# Intercom1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**INTERCOMConfig**](INTERCOMConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.intercom1 import Intercom1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Intercom1 from a JSON string
+intercom1_instance = Intercom1.from_json(json)
+# print the JSON string representation of the object
+print(intercom1_instance.to_json())
+
+# convert the object into a dict
+intercom1_dict = intercom1_instance.to_dict()
+# create an instance of Intercom1 from a dict
+intercom1_from_dict = Intercom1.from_dict(intercom1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Intercom2.md b/docs/Intercom2.md
new file mode 100644
index 0000000..f94ed02
--- /dev/null
+++ b/docs/Intercom2.md
@@ -0,0 +1,30 @@
+# Intercom2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"INTERCOM\") |
+
+## Example
+
+```python
+from vectorize_client.models.intercom2 import Intercom2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Intercom2 from a JSON string
+intercom2_instance = Intercom2.from_json(json)
+# print the JSON string representation of the object
+print(intercom2_instance.to_json())
+
+# convert the object into a dict
+intercom2_dict = intercom2_instance.to_dict()
+# create an instance of Intercom2 from a dict
+intercom2_from_dict = Intercom2.from_dict(intercom2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/MILVUSAuthConfig.md b/docs/MILVUSAuthConfig.md
new file mode 100644
index 0000000..1975d62
--- /dev/null
+++ b/docs/MILVUSAuthConfig.md
@@ -0,0 +1,34 @@
+# MILVUSAuthConfig
+
+Authentication configuration for Milvus
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name for your Milvus integration |
+**url** | **str** | Public Endpoint. Example: Enter your public endpoint for your Milvus cluster |
+**token** | **str** | Token. Example: Enter your cluster token or Username/Password | [optional]
+**username** | **str** | Username. Example: Enter your cluster Username | [optional]
+**password** | **str** | Password. Example: Enter your cluster Password | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.milvus_auth_config import MILVUSAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of MILVUSAuthConfig from a JSON string
+milvus_auth_config_instance = MILVUSAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(milvus_auth_config_instance.to_json())
+
+# convert the object into a dict
+milvus_auth_config_dict = milvus_auth_config_instance.to_dict()
+# create an instance of MILVUSAuthConfig from a dict
+milvus_auth_config_from_dict = MILVUSAuthConfig.from_dict(milvus_auth_config_dict)
+```
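+
+Because `token` and `username`/`password` are all optional, either credential style can be supplied. A minimal sketch with placeholder values, assuming keyword construction:
+
+```python
+from vectorize_client.models.milvus_auth_config import MILVUSAuthConfig
+
+# Token-based auth (endpoint and token are placeholders).
+milvus_auth = MILVUSAuthConfig(
+    name="My Milvus cluster",
+    url="https://example-cluster.milvus.example.com:19530",
+    token="milvus-token-placeholder",
+)
+
+# Alternatively, username/password auth (also placeholders).
+milvus_auth_userpass = MILVUSAuthConfig(
+    name="My Milvus cluster",
+    url="https://example-cluster.milvus.example.com:19530",
+    username="milvus-user",
+    password="milvus-password-placeholder",
+)
+```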
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/MILVUSConfig.md b/docs/MILVUSConfig.md
new file mode 100644
index 0000000..e3298ff
--- /dev/null
+++ b/docs/MILVUSConfig.md
@@ -0,0 +1,30 @@
+# MILVUSConfig
+
+Configuration for Milvus connector
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**collection** | **str** | Collection Name. Example: Enter collection name |
+
+## Example
+
+```python
+from vectorize_client.models.milvus_config import MILVUSConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of MILVUSConfig from a JSON string
+milvus_config_instance = MILVUSConfig.from_json(json)
+# print the JSON string representation of the object
+print(milvus_config_instance.to_json())
+
+# convert the object into a dict
+milvus_config_dict = milvus_config_instance.to_dict()
+# create an instance of MILVUSConfig from a dict
+milvus_config_from_dict = MILVUSConfig.from_dict(milvus_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/MetadataExtractionStrategy.md b/docs/MetadataExtractionStrategy.md
new file mode 100644
index 0000000..4cbf77b
--- /dev/null
+++ b/docs/MetadataExtractionStrategy.md
@@ -0,0 +1,30 @@
+# MetadataExtractionStrategy
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**schemas** | [**List[MetadataExtractionStrategySchema]**](MetadataExtractionStrategySchema.md) | | [optional]
+**infer_schema** | **bool** | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.metadata_extraction_strategy import MetadataExtractionStrategy
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of MetadataExtractionStrategy from a JSON string
+metadata_extraction_strategy_instance = MetadataExtractionStrategy.from_json(json)
+# print the JSON string representation of the object
+print(metadata_extraction_strategy_instance.to_json())
+
+# convert the object into a dict
+metadata_extraction_strategy_dict = metadata_extraction_strategy_instance.to_dict()
+# create an instance of MetadataExtractionStrategy from a dict
+metadata_extraction_strategy_from_dict = MetadataExtractionStrategy.from_dict(metadata_extraction_strategy_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/MetadataExtractionStrategySchema.md b/docs/MetadataExtractionStrategySchema.md
new file mode 100644
index 0000000..cb237b3
--- /dev/null
+++ b/docs/MetadataExtractionStrategySchema.md
@@ -0,0 +1,30 @@
+# MetadataExtractionStrategySchema
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | |
+**var_schema** | **str** | |
+
+## Example
+
+```python
+from vectorize_client.models.metadata_extraction_strategy_schema import MetadataExtractionStrategySchema
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of MetadataExtractionStrategySchema from a JSON string
+metadata_extraction_strategy_schema_instance = MetadataExtractionStrategySchema.from_json(json)
+# print the JSON string representation of the object
+print(metadata_extraction_strategy_schema_instance.to_json())
+
+# convert the object into a dict
+metadata_extraction_strategy_schema_dict = metadata_extraction_strategy_schema_instance.to_dict()
+# create an instance of MetadataExtractionStrategySchema from a dict
+metadata_extraction_strategy_schema_from_dict = MetadataExtractionStrategySchema.from_dict(metadata_extraction_strategy_schema_dict)
+```
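+
+Combining the two models above, a strategy holds a list of schemas. A minimal sketch with placeholder values, assuming keyword construction; note that the `schema` property is exposed as `var_schema` in Python, and the JSON schema string shown is illustrative:
+
+```python
+from vectorize_client.models.metadata_extraction_strategy import MetadataExtractionStrategy
+from vectorize_client.models.metadata_extraction_strategy_schema import MetadataExtractionStrategySchema
+
+# A single extraction schema; the id and JSON schema string are placeholders.
+schema = MetadataExtractionStrategySchema(
+    id="invoice-metadata",
+    var_schema='{"type": "object", "properties": {"invoice_number": {"type": "string"}}}',
+)
+
+strategy = MetadataExtractionStrategy(schemas=[schema], infer_schema=False)
+print(strategy.to_json())
+```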
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Milvus.md b/docs/Milvus.md
new file mode 100644
index 0000000..1ccfa28
--- /dev/null
+++ b/docs/Milvus.md
@@ -0,0 +1,31 @@
+# Milvus
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"MILVUS\") |
+**config** | [**MILVUSConfig**](MILVUSConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.milvus import Milvus
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Milvus from a JSON string
+milvus_instance = Milvus.from_json(json)
+# print the JSON string representation of the object
+print(milvus_instance.to_json())
+
+# convert the object into a dict
+milvus_dict = milvus_instance.to_dict()
+# create an instance of Milvus from a dict
+milvus_from_dict = Milvus.from_dict(milvus_dict)
+```
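+
+The `config` field nests the MILVUSConfig model shown earlier. A minimal constructor sketch with placeholder names, assuming keyword construction:
+
+```python
+from vectorize_client.models.milvus import Milvus
+from vectorize_client.models.milvus_config import MILVUSConfig
+
+# Destination connector targeting a Milvus collection; the names are placeholders.
+milvus_connector = Milvus(
+    name="My Milvus destination",
+    type="MILVUS",
+    config=MILVUSConfig(collection="my_collection"),
+)
+print(milvus_connector.to_json())
+```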
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Milvus1.md b/docs/Milvus1.md
new file mode 100644
index 0000000..c6f8dfa
--- /dev/null
+++ b/docs/Milvus1.md
@@ -0,0 +1,29 @@
+# Milvus1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**MILVUSConfig**](MILVUSConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.milvus1 import Milvus1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Milvus1 from a JSON string
+milvus1_instance = Milvus1.from_json(json)
+# print the JSON string representation of the object
+print(milvus1_instance.to_json())
+
+# convert the object into a dict
+milvus1_dict = milvus1_instance.to_dict()
+# create an instance of Milvus1 from a dict
+milvus1_from_dict = Milvus1.from_dict(milvus1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Milvus2.md b/docs/Milvus2.md
new file mode 100644
index 0000000..e69f8cd
--- /dev/null
+++ b/docs/Milvus2.md
@@ -0,0 +1,30 @@
+# Milvus2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"MILVUS\") |
+
+## Example
+
+```python
+from vectorize_client.models.milvus2 import Milvus2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Milvus2 from a JSON string
+milvus2_instance = Milvus2.from_json(json)
+# print the JSON string representation of the object
+print(milvus2_instance.to_json())
+
+# convert the object into a dict
+milvus2_dict = milvus2_instance.to_dict()
+# create an instance of Milvus2 from a dict
+milvus2_from_dict = Milvus2.from_dict(milvus2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/N8NConfig.md b/docs/N8NConfig.md
new file mode 100644
index 0000000..e1d6656
--- /dev/null
+++ b/docs/N8NConfig.md
@@ -0,0 +1,31 @@
+# N8NConfig
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**account** | **str** | |
+**webhook_path** | **str** | |
+**headers** | **Dict[str, str]** | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.n8_n_config import N8NConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of N8NConfig from a JSON string
+n8_n_config_instance = N8NConfig.from_json(json)
+# print the JSON string representation of the object
+print(n8_n_config_instance.to_json())
+
+# convert the object into a dict
+n8_n_config_dict = n8_n_config_instance.to_dict()
+# create an instance of N8NConfig from a dict
+n8_n_config_from_dict = N8NConfig.from_dict(n8_n_config_dict)
+```
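+
+A minimal constructor sketch, assuming keyword construction; the account, webhook path, and header values are placeholders, and the optional `headers` map is passed as a plain dict:
+
+```python
+from vectorize_client.models.n8_n_config import N8NConfig
+
+# Webhook-style configuration; all values are placeholders.
+n8n_config = N8NConfig(
+    account="my-n8n-account",
+    webhook_path="/webhook/vectorize",
+    headers={"X-Example-Header": "placeholder-value"},
+)
+print(n8n_config.to_json())
+```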
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/NOTIONAuthConfig.md b/docs/NOTIONAuthConfig.md
new file mode 100644
index 0000000..62e56ec
--- /dev/null
+++ b/docs/NOTIONAuthConfig.md
@@ -0,0 +1,33 @@
+# NOTIONAuthConfig
+
+Authentication configuration for Notion
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name |
+**access_token** | **str** | Connect Notion to Vectorize. Note that this will affect existing connections. Example: Authorize |
+**s3id** | **str** | | [optional]
+**edited_token** | **str** | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.notion_auth_config import NOTIONAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of NOTIONAuthConfig from a JSON string
+notion_auth_config_instance = NOTIONAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(notion_auth_config_instance.to_json())
+
+# convert the object into a dict
+notion_auth_config_dict = notion_auth_config_instance.to_dict()
+# create an instance of NOTIONAuthConfig from a dict
+notion_auth_config_from_dict = NOTIONAuthConfig.from_dict(notion_auth_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/NOTIONConfig.md b/docs/NOTIONConfig.md
new file mode 100644
index 0000000..68f5d0a
--- /dev/null
+++ b/docs/NOTIONConfig.md
@@ -0,0 +1,34 @@
+# NOTIONConfig
+
+Configuration for Notion connector
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**select_resources** | **str** | Select Notion Resources |
+**database_ids** | **str** | Database IDs |
+**database_names** | **str** | Database Names |
+**page_ids** | **str** | Page IDs |
+**page_names** | **str** | Page Names |
+
+## Example
+
+```python
+from vectorize_client.models.notion_config import NOTIONConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of NOTIONConfig from a JSON string
+notion_config_instance = NOTIONConfig.from_json(json)
+# print the JSON string representation of the object
+print(notion_config_instance.to_json())
+
+# convert the object into a dict
+notion_config_dict = notion_config_instance.to_dict()
+# create an instance of NOTIONConfig from a dict
+notion_config_from_dict = NOTIONConfig.from_dict(notion_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/NOTIONOAUTHMULTIAuthConfig.md b/docs/NOTIONOAUTHMULTIAuthConfig.md
new file mode 100644
index 0000000..dcd2610
--- /dev/null
+++ b/docs/NOTIONOAUTHMULTIAuthConfig.md
@@ -0,0 +1,33 @@
+# NOTIONOAUTHMULTIAuthConfig
+
+Authentication configuration for Notion Multi-User (Vectorize)
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name |
+**authorized_users** | **str** | Authorized Users. Users who have authorized access to their Notion content | [optional]
+**edited_users** | **str** | | [optional]
+**deleted_users** | **str** | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.notionoauthmulti_auth_config import NOTIONOAUTHMULTIAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of NOTIONOAUTHMULTIAuthConfig from a JSON string
+notionoauthmulti_auth_config_instance = NOTIONOAUTHMULTIAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(notionoauthmulti_auth_config_instance.to_json())
+
+# convert the object into a dict
+notionoauthmulti_auth_config_dict = notionoauthmulti_auth_config_instance.to_dict()
+# create an instance of NOTIONOAUTHMULTIAuthConfig from a dict
+notionoauthmulti_auth_config_from_dict = NOTIONOAUTHMULTIAuthConfig.from_dict(notionoauthmulti_auth_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/NOTIONOAUTHMULTICUSTOMAuthConfig.md b/docs/NOTIONOAUTHMULTICUSTOMAuthConfig.md
new file mode 100644
index 0000000..b9edec0
--- /dev/null
+++ b/docs/NOTIONOAUTHMULTICUSTOMAuthConfig.md
@@ -0,0 +1,35 @@
+# NOTIONOAUTHMULTICUSTOMAuthConfig
+
+Authentication configuration for Notion Multi-User (White Label)
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name |
+**client_id** | **str** | Notion Client ID. Example: Enter Client ID |
+**client_secret** | **str** | Notion Client Secret. Example: Enter Client Secret |
+**authorized_users** | **str** | Authorized Users | [optional]
+**edited_users** | **str** | | [optional]
+**deleted_users** | **str** | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.notionoauthmulticustom_auth_config import NOTIONOAUTHMULTICUSTOMAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of NOTIONOAUTHMULTICUSTOMAuthConfig from a JSON string
+notionoauthmulticustom_auth_config_instance = NOTIONOAUTHMULTICUSTOMAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(notionoauthmulticustom_auth_config_instance.to_json())
+
+# convert the object into a dict
+notionoauthmulticustom_auth_config_dict = notionoauthmulticustom_auth_config_instance.to_dict()
+# create an instance of NOTIONOAUTHMULTICUSTOMAuthConfig from a dict
+notionoauthmulticustom_auth_config_from_dict = NOTIONOAUTHMULTICUSTOMAuthConfig.from_dict(notionoauthmulticustom_auth_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Notion.md b/docs/Notion.md
new file mode 100644
index 0000000..f900613
--- /dev/null
+++ b/docs/Notion.md
@@ -0,0 +1,31 @@
+# Notion
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"NOTION\") |
+**config** | [**NOTIONConfig**](NOTIONConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.notion import Notion
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Notion from a JSON string
+notion_instance = Notion.from_json(json)
+# print the JSON string representation of the object
+print(notion_instance.to_json())
+
+# convert the object into a dict
+notion_dict = notion_instance.to_dict()
+# create an instance of Notion from a dict
+notion_from_dict = Notion.from_dict(notion_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Notion1.md b/docs/Notion1.md
new file mode 100644
index 0000000..db7e677
--- /dev/null
+++ b/docs/Notion1.md
@@ -0,0 +1,29 @@
+# Notion1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**NOTIONConfig**](NOTIONConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.notion1 import Notion1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Notion1 from a JSON string
+notion1_instance = Notion1.from_json(json)
+# print the JSON string representation of the object
+print(notion1_instance.to_json())
+
+# convert the object into a dict
+notion1_dict = notion1_instance.to_dict()
+# create an instance of Notion1 from a dict
+notion1_from_dict = Notion1.from_dict(notion1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Notion2.md b/docs/Notion2.md
new file mode 100644
index 0000000..cd8b0c2
--- /dev/null
+++ b/docs/Notion2.md
@@ -0,0 +1,30 @@
+# Notion2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"NOTION\") |
+
+## Example
+
+```python
+from vectorize_client.models.notion2 import Notion2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Notion2 from a JSON string
+notion2_instance = Notion2.from_json(json)
+# print the JSON string representation of the object
+print(notion2_instance.to_json())
+
+# convert the object into a dict
+notion2_dict = notion2_instance.to_dict()
+# create an instance of Notion2 from a dict
+notion2_from_dict = Notion2.from_dict(notion2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/NotionOauthMulti.md b/docs/NotionOauthMulti.md
new file mode 100644
index 0000000..a580908
--- /dev/null
+++ b/docs/NotionOauthMulti.md
@@ -0,0 +1,31 @@
+# NotionOauthMulti
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"NOTION_OAUTH_MULTI\") |
+**config** | [**NOTIONOAUTHMULTIAuthConfig**](NOTIONOAUTHMULTIAuthConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.notion_oauth_multi import NotionOauthMulti
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of NotionOauthMulti from a JSON string
+notion_oauth_multi_instance = NotionOauthMulti.from_json(json)
+# print the JSON string representation of the object
+print(notion_oauth_multi_instance.to_json())
+
+# convert the object into a dict
+notion_oauth_multi_dict = notion_oauth_multi_instance.to_dict()
+# create an instance of NotionOauthMulti from a dict
+notion_oauth_multi_from_dict = NotionOauthMulti.from_dict(notion_oauth_multi_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/NotionOauthMulti1.md b/docs/NotionOauthMulti1.md
new file mode 100644
index 0000000..483b2ae
--- /dev/null
+++ b/docs/NotionOauthMulti1.md
@@ -0,0 +1,29 @@
+# NotionOauthMulti1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**NOTIONOAUTHMULTIAuthConfig**](NOTIONOAUTHMULTIAuthConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.notion_oauth_multi1 import NotionOauthMulti1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of NotionOauthMulti1 from a JSON string
+notion_oauth_multi1_instance = NotionOauthMulti1.from_json(json)
+# print the JSON string representation of the object
+print(notion_oauth_multi1_instance.to_json())
+
+# convert the object into a dict
+notion_oauth_multi1_dict = notion_oauth_multi1_instance.to_dict()
+# create an instance of NotionOauthMulti1 from a dict
+notion_oauth_multi1_from_dict = NotionOauthMulti1.from_dict(notion_oauth_multi1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/NotionOauthMulti2.md b/docs/NotionOauthMulti2.md
new file mode 100644
index 0000000..e6df694
--- /dev/null
+++ b/docs/NotionOauthMulti2.md
@@ -0,0 +1,30 @@
+# NotionOauthMulti2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"NOTION_OAUTH_MULTI\") |
+
+## Example
+
+```python
+from vectorize_client.models.notion_oauth_multi2 import NotionOauthMulti2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of NotionOauthMulti2 from a JSON string
+notion_oauth_multi2_instance = NotionOauthMulti2.from_json(json)
+# print the JSON string representation of the object
+print(notion_oauth_multi2_instance.to_json())
+
+# convert the object into a dict
+notion_oauth_multi2_dict = notion_oauth_multi2_instance.to_dict()
+# create an instance of NotionOauthMulti2 from a dict
+notion_oauth_multi2_from_dict = NotionOauthMulti2.from_dict(notion_oauth_multi2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/NotionOauthMultiCustom.md b/docs/NotionOauthMultiCustom.md
new file mode 100644
index 0000000..c7154d0
--- /dev/null
+++ b/docs/NotionOauthMultiCustom.md
@@ -0,0 +1,31 @@
+# NotionOauthMultiCustom
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"NOTION_OAUTH_MULTI_CUSTOM\") |
+**config** | [**NOTIONOAUTHMULTICUSTOMAuthConfig**](NOTIONOAUTHMULTICUSTOMAuthConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.notion_oauth_multi_custom import NotionOauthMultiCustom
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of NotionOauthMultiCustom from a JSON string
+notion_oauth_multi_custom_instance = NotionOauthMultiCustom.from_json(json)
+# print the JSON string representation of the object
+print(notion_oauth_multi_custom_instance.to_json())
+
+# convert the object into a dict
+notion_oauth_multi_custom_dict = notion_oauth_multi_custom_instance.to_dict()
+# create an instance of NotionOauthMultiCustom from a dict
+notion_oauth_multi_custom_from_dict = NotionOauthMultiCustom.from_dict(notion_oauth_multi_custom_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/NotionOauthMultiCustom1.md b/docs/NotionOauthMultiCustom1.md
new file mode 100644
index 0000000..ffb6a65
--- /dev/null
+++ b/docs/NotionOauthMultiCustom1.md
@@ -0,0 +1,29 @@
+# NotionOauthMultiCustom1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**NOTIONOAUTHMULTICUSTOMAuthConfig**](NOTIONOAUTHMULTICUSTOMAuthConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.notion_oauth_multi_custom1 import NotionOauthMultiCustom1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of NotionOauthMultiCustom1 from a JSON string
+notion_oauth_multi_custom1_instance = NotionOauthMultiCustom1.from_json(json)
+# print the JSON string representation of the object
+print(notion_oauth_multi_custom1_instance.to_json())
+
+# convert the object into a dict
+notion_oauth_multi_custom1_dict = notion_oauth_multi_custom1_instance.to_dict()
+# create an instance of NotionOauthMultiCustom1 from a dict
+notion_oauth_multi_custom1_from_dict = NotionOauthMultiCustom1.from_dict(notion_oauth_multi_custom1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/NotionOauthMultiCustom2.md b/docs/NotionOauthMultiCustom2.md
new file mode 100644
index 0000000..d863e79
--- /dev/null
+++ b/docs/NotionOauthMultiCustom2.md
@@ -0,0 +1,30 @@
+# NotionOauthMultiCustom2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"NOTION_OAUTH_MULTI_CUSTOM\") |
+
+## Example
+
+```python
+from vectorize_client.models.notion_oauth_multi_custom2 import NotionOauthMultiCustom2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of NotionOauthMultiCustom2 from a JSON string
+notion_oauth_multi_custom2_instance = NotionOauthMultiCustom2.from_json(json)
+# print the JSON string representation of the object
+print(notion_oauth_multi_custom2_instance.to_json())
+
+# convert the object into a dict
+notion_oauth_multi_custom2_dict = notion_oauth_multi_custom2_instance.to_dict()
+# create an instance of NotionOauthMultiCustom2 from a dict
+notion_oauth_multi_custom2_from_dict = NotionOauthMultiCustom2.from_dict(notion_oauth_multi_custom2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/ONEDRIVEAuthConfig.md b/docs/ONEDRIVEAuthConfig.md
new file mode 100644
index 0000000..7f24b62
--- /dev/null
+++ b/docs/ONEDRIVEAuthConfig.md
@@ -0,0 +1,34 @@
+# ONEDRIVEAuthConfig
+
+Authentication configuration for OneDrive
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name |
+**ms_client_id** | **str** | Client Id. Example: Enter Client Id |
+**ms_tenant_id** | **str** | Tenant Id. Example: Enter Tenant Id |
+**ms_client_secret** | **str** | Client Secret. Example: Enter Client Secret |
+**users** | **str** | Users. Enter the user emails to import files from. Example: developer@vectorize.io |
+
+## Example
+
+```python
+from vectorize_client.models.onedrive_auth_config import ONEDRIVEAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of ONEDRIVEAuthConfig from a JSON string
+onedrive_auth_config_instance = ONEDRIVEAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(onedrive_auth_config_instance.to_json())
+
+# convert the object into a dict
+onedrive_auth_config_dict = onedrive_auth_config_instance.to_dict()
+# create an instance of ONEDRIVEAuthConfig from a dict
+onedrive_auth_config_from_dict = ONEDRIVEAuthConfig.from_dict(onedrive_auth_config_dict)
+```
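+
+A minimal constructor sketch, assuming keyword construction; the Azure AD identifiers, secret, and email below are placeholders, not real values:
+
+```python
+from vectorize_client.models.onedrive_auth_config import ONEDRIVEAuthConfig
+
+# All identifiers and secrets below are placeholders.
+onedrive_auth = ONEDRIVEAuthConfig(
+    name="My OneDrive authorization",
+    ms_client_id="00000000-0000-0000-0000-000000000000",
+    ms_tenant_id="11111111-1111-1111-1111-111111111111",
+    ms_client_secret="client-secret-placeholder",
+    users="developer@vectorize.io",
+)
+print(onedrive_auth.to_json())
+```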
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/ONEDRIVEConfig.md b/docs/ONEDRIVEConfig.md
new file mode 100644
index 0000000..23bb14e
--- /dev/null
+++ b/docs/ONEDRIVEConfig.md
@@ -0,0 +1,31 @@
+# ONEDRIVEConfig
+
+Configuration for OneDrive connector
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**file_extensions** | **List[str]** | File Extensions |
+**path_prefix** | **str** | Read starting from this folder (optional). Example: Enter Folder path: /exampleFolder/subFolder | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.onedrive_config import ONEDRIVEConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of ONEDRIVEConfig from a JSON string
+onedrive_config_instance = ONEDRIVEConfig.from_json(json)
+# print the JSON string representation of the object
+print(onedrive_config_instance.to_json())
+
+# convert the object into a dict
+onedrive_config_dict = onedrive_config_instance.to_dict()
+# create an instance of ONEDRIVEConfig from a dict
+onedrive_config_from_dict = ONEDRIVEConfig.from_dict(onedrive_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/OPENAIAuthConfig.md b/docs/OPENAIAuthConfig.md
new file mode 100644
index 0000000..df3e0e0
--- /dev/null
+++ b/docs/OPENAIAuthConfig.md
@@ -0,0 +1,31 @@
+# OPENAIAuthConfig
+
+Authentication configuration for OpenAI
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name for your OpenAI integration |
+**key** | **str** | API Key. Example: Enter your OpenAI API Key |
+
+## Example
+
+```python
+from vectorize_client.models.openai_auth_config import OPENAIAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of OPENAIAuthConfig from a JSON string
+openai_auth_config_instance = OPENAIAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(openai_auth_config_instance.to_json())
+
+# convert the object into a dict
+openai_auth_config_dict = openai_auth_config_instance.to_dict()
+# create an instance of OPENAIAuthConfig from a dict
+openai_auth_config_from_dict = OPENAIAuthConfig.from_dict(openai_auth_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/OneDrive.md b/docs/OneDrive.md
new file mode 100644
index 0000000..eb6c950
--- /dev/null
+++ b/docs/OneDrive.md
@@ -0,0 +1,31 @@
+# OneDrive
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"ONE_DRIVE\") |
+**config** | [**ONEDRIVEConfig**](ONEDRIVEConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.one_drive import OneDrive
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of OneDrive from a JSON string
+one_drive_instance = OneDrive.from_json(json)
+# print the JSON string representation of the object
+print(one_drive_instance.to_json())
+
+# convert the object into a dict
+one_drive_dict = one_drive_instance.to_dict()
+# create an instance of OneDrive from a dict
+one_drive_from_dict = OneDrive.from_dict(one_drive_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/OneDrive1.md b/docs/OneDrive1.md
new file mode 100644
index 0000000..e97c1f0
--- /dev/null
+++ b/docs/OneDrive1.md
@@ -0,0 +1,29 @@
+# OneDrive1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**ONEDRIVEConfig**](ONEDRIVEConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.one_drive1 import OneDrive1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of OneDrive1 from a JSON string
+one_drive1_instance = OneDrive1.from_json(json)
+# print the JSON string representation of the object
+print(one_drive1_instance.to_json())
+
+# convert the object into a dict
+one_drive1_dict = one_drive1_instance.to_dict()
+# create an instance of OneDrive1 from a dict
+one_drive1_from_dict = OneDrive1.from_dict(one_drive1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/OneDrive2.md b/docs/OneDrive2.md
new file mode 100644
index 0000000..5b5141f
--- /dev/null
+++ b/docs/OneDrive2.md
@@ -0,0 +1,30 @@
+# OneDrive2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"ONE_DRIVE\") |
+
+## Example
+
+```python
+from vectorize_client.models.one_drive2 import OneDrive2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of OneDrive2 from a JSON string
+one_drive2_instance = OneDrive2.from_json(json)
+# print the JSON string representation of the object
+print(one_drive2_instance.to_json())
+
+# convert the object into a dict
+one_drive2_dict = one_drive2_instance.to_dict()
+# create an instance of OneDrive2 from a dict
+one_drive2_from_dict = OneDrive2.from_dict(one_drive2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Openai.md b/docs/Openai.md
new file mode 100644
index 0000000..2ac5bd2
--- /dev/null
+++ b/docs/Openai.md
@@ -0,0 +1,31 @@
+# Openai
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"OPENAI\") |
+**config** | [**OPENAIAuthConfig**](OPENAIAuthConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.openai import Openai
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Openai from a JSON string
+openai_instance = Openai.from_json(json)
+# print the JSON string representation of the object
+print(openai_instance.to_json())
+
+# convert the object into a dict
+openai_dict = openai_instance.to_dict()
+# create an instance of Openai from a dict
+openai_from_dict = Openai.from_dict(openai_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Openai1.md b/docs/Openai1.md
new file mode 100644
index 0000000..58aafbc
--- /dev/null
+++ b/docs/Openai1.md
@@ -0,0 +1,29 @@
+# Openai1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**OPENAIAuthConfig**](OPENAIAuthConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.openai1 import Openai1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Openai1 from a JSON string
+openai1_instance = Openai1.from_json(json)
+# print the JSON string representation of the object
+print(openai1_instance.to_json())
+
+# convert the object into a dict
+openai1_dict = openai1_instance.to_dict()
+# create an instance of Openai1 from a dict
+openai1_from_dict = Openai1.from_dict(openai1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Openai2.md b/docs/Openai2.md
new file mode 100644
index 0000000..3f00f74
--- /dev/null
+++ b/docs/Openai2.md
@@ -0,0 +1,30 @@
+# Openai2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"OPENAI\") |
+
+## Example
+
+```python
+from vectorize_client.models.openai2 import Openai2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Openai2 from a JSON string
+openai2_instance = Openai2.from_json(json)
+# print the JSON string representation of the object
+print(openai2_instance.to_json())
+
+# convert the object into a dict
+openai2_dict = openai2_instance.to_dict()
+# create an instance of Openai2 from a dict
+openai2_from_dict = Openai2.from_dict(openai2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/PINECONEAuthConfig.md b/docs/PINECONEAuthConfig.md
new file mode 100644
index 0000000..d68fdaa
--- /dev/null
+++ b/docs/PINECONEAuthConfig.md
@@ -0,0 +1,31 @@
+# PINECONEAuthConfig
+
+Authentication configuration for Pinecone
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name for your Pinecone integration |
+**api_key** | **str** | API Key. Example: Enter your API Key |
+
+## Example
+
+```python
+from vectorize_client.models.pinecone_auth_config import PINECONEAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of PINECONEAuthConfig from a JSON string
+pinecone_auth_config_instance = PINECONEAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(pinecone_auth_config_instance.to_json())
+
+# convert the object into a dict
+pinecone_auth_config_dict = pinecone_auth_config_instance.to_dict()
+# create an instance of PINECONEAuthConfig from a dict
+pinecone_auth_config_from_dict = PINECONEAuthConfig.from_dict(pinecone_auth_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/PINECONEConfig.md b/docs/PINECONEConfig.md
new file mode 100644
index 0000000..79cb1c0
--- /dev/null
+++ b/docs/PINECONEConfig.md
@@ -0,0 +1,31 @@
+# PINECONEConfig
+
+Configuration for Pinecone connector
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**index** | **str** | Index Name. Example: Enter index name |
+**namespace** | **str** | Namespace. Example: Enter namespace | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.pinecone_config import PINECONEConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of PINECONEConfig from a JSON string
+pinecone_config_instance = PINECONEConfig.from_json(json)
+# print the JSON string representation of the object
+print(pinecone_config_instance.to_json())
+
+# convert the object into a dict
+pinecone_config_dict = pinecone_config_instance.to_dict()
+# create an instance of PINECONEConfig from a dict
+pinecone_config_from_dict = PINECONEConfig.from_dict(pinecone_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/POSTGRESQLAuthConfig.md b/docs/POSTGRESQLAuthConfig.md
new file mode 100644
index 0000000..6947595
--- /dev/null
+++ b/docs/POSTGRESQLAuthConfig.md
@@ -0,0 +1,35 @@
+# POSTGRESQLAuthConfig
+
+Authentication configuration for PostgreSQL
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name for your PostgreSQL integration |
+**host** | **str** | Host. Example: Enter the host of the deployment |
+**port** | **float** | Port. Example: Enter the port of the deployment | [optional] [default to 5432]
+**database** | **str** | Database. Example: Enter the database name |
+**username** | **str** | Username. Example: Enter the username |
+**password** | **str** | Password. Example: Enter the username's password |
+
+## Example
+
+```python
+from vectorize_client.models.postgresql_auth_config import POSTGRESQLAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of POSTGRESQLAuthConfig from a JSON string
+postgresql_auth_config_instance = POSTGRESQLAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(postgresql_auth_config_instance.to_json())
+
+# convert the object into a dict
+postgresql_auth_config_dict = postgresql_auth_config_instance.to_dict()
+# create an instance of POSTGRESQLAuthConfig from a dict
+postgresql_auth_config_from_dict = POSTGRESQLAuthConfig.from_dict(postgresql_auth_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/POSTGRESQLConfig.md b/docs/POSTGRESQLConfig.md
new file mode 100644
index 0000000..8a4acb2
--- /dev/null
+++ b/docs/POSTGRESQLConfig.md
@@ -0,0 +1,30 @@
+# POSTGRESQLConfig
+
+Configuration for PostgreSQL connector
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**table** | **str** | Table Name. Example: Enter `<table name>` or `<schema>.<table name>` |
+
+## Example
+
+```python
+from vectorize_client.models.postgresql_config import POSTGRESQLConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of POSTGRESQLConfig from a JSON string
+postgresql_config_instance = POSTGRESQLConfig.from_json(json)
+# print the JSON string representation of the object
+print(postgresql_config_instance.to_json())
+
+# convert the object into a dict
+postgresql_config_dict = postgresql_config_instance.to_dict()
+# create an instance of POSTGRESQLConfig from a dict
+postgresql_config_from_dict = POSTGRESQLConfig.from_dict(postgresql_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Pinecone.md b/docs/Pinecone.md
new file mode 100644
index 0000000..b104ef4
--- /dev/null
+++ b/docs/Pinecone.md
@@ -0,0 +1,31 @@
+# Pinecone
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"PINECONE\") |
+**config** | [**PINECONEConfig**](PINECONEConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.pinecone import Pinecone
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Pinecone from a JSON string
+pinecone_instance = Pinecone.from_json(json)
+# print the JSON string representation of the object
+print(pinecone_instance.to_json())
+
+# convert the object into a dict
+pinecone_dict = pinecone_instance.to_dict()
+# create an instance of Pinecone from a dict
+pinecone_from_dict = Pinecone.from_dict(pinecone_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Pinecone1.md b/docs/Pinecone1.md
new file mode 100644
index 0000000..1debb9f
--- /dev/null
+++ b/docs/Pinecone1.md
@@ -0,0 +1,29 @@
+# Pinecone1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**PINECONEConfig**](PINECONEConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.pinecone1 import Pinecone1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Pinecone1 from a JSON string
+pinecone1_instance = Pinecone1.from_json(json)
+# print the JSON string representation of the object
+print(pinecone1_instance.to_json())
+
+# convert the object into a dict
+pinecone1_dict = pinecone1_instance.to_dict()
+# create an instance of Pinecone1 from a dict
+pinecone1_from_dict = Pinecone1.from_dict(pinecone1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Pinecone2.md b/docs/Pinecone2.md
new file mode 100644
index 0000000..e22ea34
--- /dev/null
+++ b/docs/Pinecone2.md
@@ -0,0 +1,30 @@
+# Pinecone2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"PINECONE\") |
+
+## Example
+
+```python
+from vectorize_client.models.pinecone2 import Pinecone2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Pinecone2 from a JSON string
+pinecone2_instance = Pinecone2.from_json(json)
+# print the JSON string representation of the object
+print(pinecone2_instance.to_json())
+
+# convert the object into a dict
+pinecone2_dict = pinecone2_instance.to_dict()
+# create an instance of Pinecone2 from a dict
+pinecone2_from_dict = Pinecone2.from_dict(pinecone2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/PipelineAIPlatformRequestInner.md b/docs/PipelineAIPlatformRequestInner.md
new file mode 100644
index 0000000..cb5cef0
--- /dev/null
+++ b/docs/PipelineAIPlatformRequestInner.md
@@ -0,0 +1,30 @@
+# PipelineAIPlatformRequestInner
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"BEDROCK\") |
+
+## Example
+
+```python
+from vectorize_client.models.pipeline_ai_platform_request_inner import PipelineAIPlatformRequestInner
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of PipelineAIPlatformRequestInner from a JSON string
+pipeline_ai_platform_request_inner_instance = PipelineAIPlatformRequestInner.from_json(json)
+# print the JSON string representation of the object
+print(pipeline_ai_platform_request_inner_instance.to_json())
+
+# convert the object into a dict
+pipeline_ai_platform_request_inner_dict = pipeline_ai_platform_request_inner_instance.to_dict()
+# create an instance of PipelineAIPlatformRequestInner from a dict
+pipeline_ai_platform_request_inner_from_dict = PipelineAIPlatformRequestInner.from_dict(pipeline_ai_platform_request_inner_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/PipelineConfigurationSchema.md b/docs/PipelineConfigurationSchema.md
new file mode 100644
index 0000000..9de2b40
--- /dev/null
+++ b/docs/PipelineConfigurationSchema.md
@@ -0,0 +1,33 @@
+# PipelineConfigurationSchema
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**source_connectors** | [**List[PipelineSourceConnectorRequestInner]**](PipelineSourceConnectorRequestInner.md) | |
+**destination_connector** | [**List[PipelineDestinationConnectorRequestInner]**](PipelineDestinationConnectorRequestInner.md) | |
+**ai_platform** | [**List[PipelineAIPlatformRequestInner]**](PipelineAIPlatformRequestInner.md) | |
+**pipeline_name** | **str** | |
+**schedule** | [**ScheduleSchema**](ScheduleSchema.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.pipeline_configuration_schema import PipelineConfigurationSchema
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of PipelineConfigurationSchema from a JSON string
+pipeline_configuration_schema_instance = PipelineConfigurationSchema.from_json(json)
+# print the JSON string representation of the object
+print(pipeline_configuration_schema_instance.to_json())
+
+# convert the object into a dict
+pipeline_configuration_schema_dict = pipeline_configuration_schema_instance.to_dict()
+# create an instance of PipelineConfigurationSchema from a dict
+pipeline_configuration_schema_from_dict = PipelineConfigurationSchema.from_dict(pipeline_configuration_schema_dict)
+```
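+
+For a filled-in request, the wire format uses camelCase keys, matching the `create_pipeline` example in [PipelinesApi.md](PipelinesApi.md). A minimal sketch; the connector ids are placeholders:
+
+```python
+from vectorize_client.models.pipeline_configuration_schema import PipelineConfigurationSchema
+
+# Key names follow the create_pipeline example; replace the ids with your own connectors.
+schema = PipelineConfigurationSchema.from_dict({
+    "sourceConnectors": [{"id": "source-connector-id", "type": "AWS_S3"}],
+    "destinationConnector": [{"id": "destination-connector-id", "type": "CAPELLA"}],
+    "aiPlatform": [{"id": "ai-platform-connector-id", "type": "BEDROCK"}],
+    "pipelineName": "Data Processing Pipeline",
+    "schedule": {"type": "manual"},
+})
+print(schema.to_json())
+```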
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/PipelineDestinationConnectorRequestInner.md b/docs/PipelineDestinationConnectorRequestInner.md
new file mode 100644
index 0000000..232c0d0
--- /dev/null
+++ b/docs/PipelineDestinationConnectorRequestInner.md
@@ -0,0 +1,30 @@
+# PipelineDestinationConnectorRequestInner
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"CAPELLA\") |
+
+## Example
+
+```python
+from vectorize_client.models.pipeline_destination_connector_request_inner import PipelineDestinationConnectorRequestInner
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of PipelineDestinationConnectorRequestInner from a JSON string
+pipeline_destination_connector_request_inner_instance = PipelineDestinationConnectorRequestInner.from_json(json)
+# print the JSON string representation of the object
+print(pipeline_destination_connector_request_inner_instance.to_json())
+
+# convert the object into a dict
+pipeline_destination_connector_request_inner_dict = pipeline_destination_connector_request_inner_instance.to_dict()
+# create an instance of PipelineDestinationConnectorRequestInner from a dict
+pipeline_destination_connector_request_inner_from_dict = PipelineDestinationConnectorRequestInner.from_dict(pipeline_destination_connector_request_inner_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/PipelineEvents.md b/docs/PipelineEvents.md
new file mode 100644
index 0000000..07eb1fb
--- /dev/null
+++ b/docs/PipelineEvents.md
@@ -0,0 +1,33 @@
+# PipelineEvents
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | |
+**type** | **str** | |
+**timestamp** | **str** | |
+**details** | **Dict[str, Optional[object]]** | | [optional]
+**summary** | **Dict[str, Optional[object]]** | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.pipeline_events import PipelineEvents
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of PipelineEvents from a JSON string
+pipeline_events_instance = PipelineEvents.from_json(json)
+# print the JSON string representation of the object
+print(pipeline_events_instance.to_json())
+
+# convert the object into a dict
+pipeline_events_dict = pipeline_events_instance.to_dict()
+# create an instance of PipelineEvents from a dict
+pipeline_events_from_dict = PipelineEvents.from_dict(pipeline_events_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/PipelineListSummary.md b/docs/PipelineListSummary.md
new file mode 100644
index 0000000..5026cf2
--- /dev/null
+++ b/docs/PipelineListSummary.md
@@ -0,0 +1,41 @@
+# PipelineListSummary
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | |
+**name** | **str** | |
+**document_count** | **float** | |
+**source_connector_auth_ids** | **List[str]** | |
+**destination_connector_auth_ids** | **List[str]** | |
+**ai_platform_auth_ids** | **List[str]** | |
+**source_connector_types** | **List[str]** | |
+**destination_connector_types** | **List[str]** | |
+**ai_platform_types** | **List[str]** | |
+**created_at** | **str** | |
+**created_by** | **str** | |
+**status** | **str** | | [optional]
+**config_doc** | **Dict[str, Optional[object]]** | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.pipeline_list_summary import PipelineListSummary
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of PipelineListSummary from a JSON string
+pipeline_list_summary_instance = PipelineListSummary.from_json(json)
+# print the JSON string representation of the object
+print(pipeline_list_summary_instance.to_json())
+
+# convert the object into a dict
+pipeline_list_summary_dict = pipeline_list_summary_instance.to_dict()
+# create an instance of PipelineListSummary from a dict
+pipeline_list_summary_from_dict = PipelineListSummary.from_dict(pipeline_list_summary_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/PipelineMetrics.md b/docs/PipelineMetrics.md
new file mode 100644
index 0000000..413e5a3
--- /dev/null
+++ b/docs/PipelineMetrics.md
@@ -0,0 +1,32 @@
+# PipelineMetrics
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**timestamp** | **str** | |
+**new_objects** | **float** | |
+**changed_objects** | **float** | |
+**deleted_objects** | **float** | |
+
+## Example
+
+```python
+from vectorize_client.models.pipeline_metrics import PipelineMetrics
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of PipelineMetrics from a JSON string
+pipeline_metrics_instance = PipelineMetrics.from_json(json)
+# print the JSON string representation of the object
+print(pipeline_metrics_instance.to_json())
+
+# convert the object into a dict
+pipeline_metrics_dict = pipeline_metrics_instance.to_dict()
+# create an instance of PipelineMetrics from a dict
+pipeline_metrics_from_dict = PipelineMetrics.from_dict(pipeline_metrics_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/PipelineSourceConnectorRequestInner.md b/docs/PipelineSourceConnectorRequestInner.md
new file mode 100644
index 0000000..1694a74
--- /dev/null
+++ b/docs/PipelineSourceConnectorRequestInner.md
@@ -0,0 +1,30 @@
+# PipelineSourceConnectorRequestInner
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"AWS_S3\") |
+
+## Example
+
+```python
+from vectorize_client.models.pipeline_source_connector_request_inner import PipelineSourceConnectorRequestInner
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of PipelineSourceConnectorRequestInner from a JSON string
+pipeline_source_connector_request_inner_instance = PipelineSourceConnectorRequestInner.from_json(json)
+# print the JSON string representation of the object
+print(pipeline_source_connector_request_inner_instance.to_json())
+
+# convert the object into a dict
+pipeline_source_connector_request_inner_dict = pipeline_source_connector_request_inner_instance.to_dict()
+# create an instance of PipelineSourceConnectorRequestInner from a dict
+pipeline_source_connector_request_inner_from_dict = PipelineSourceConnectorRequestInner.from_dict(pipeline_source_connector_request_inner_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/PipelineSummary.md b/docs/PipelineSummary.md
new file mode 100644
index 0000000..a269f34
--- /dev/null
+++ b/docs/PipelineSummary.md
@@ -0,0 +1,44 @@
+# PipelineSummary
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | |
+**name** | **str** | |
+**document_count** | **float** | |
+**source_connector_auth_ids** | **List[str]** | |
+**destination_connector_auth_ids** | **List[str]** | |
+**ai_platform_auth_ids** | **List[str]** | |
+**source_connector_types** | **List[str]** | |
+**destination_connector_types** | **List[str]** | |
+**ai_platform_types** | **List[str]** | |
+**created_at** | **str** | |
+**created_by** | **str** | |
+**status** | **str** | | [optional]
+**config_doc** | **Dict[str, Optional[object]]** | | [optional]
+**source_connectors** | [**List[SourceConnector]**](SourceConnector.md) | |
+**destination_connectors** | [**List[DestinationConnector]**](DestinationConnector.md) | |
+**ai_platforms** | [**List[AIPlatform]**](AIPlatform.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.pipeline_summary import PipelineSummary
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of PipelineSummary from a JSON string
+pipeline_summary_instance = PipelineSummary.from_json(json)
+# print the JSON string representation of the object
+print(pipeline_summary_instance.to_json())
+
+# convert the object into a dict
+pipeline_summary_dict = pipeline_summary_instance.to_dict()
+# create an instance of PipelineSummary from a dict
+pipeline_summary_from_dict = PipelineSummary.from_dict(pipeline_summary_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/PipelinesApi.md b/docs/PipelinesApi.md
new file mode 100644
index 0000000..4b4136d
--- /dev/null
+++ b/docs/PipelinesApi.md
@@ -0,0 +1,945 @@
+# vectorize_client.PipelinesApi
+
+All URIs are relative to *https://api.vectorize.io/v1*
+
+Method | HTTP request | Description
+------------- | ------------- | -------------
+[**create_pipeline**](PipelinesApi.md#create_pipeline) | **POST** /org/{organizationId}/pipelines | Create a new pipeline
+[**delete_pipeline**](PipelinesApi.md#delete_pipeline) | **DELETE** /org/{organizationId}/pipelines/{pipelineId} | Delete a pipeline
+[**get_deep_research_result**](PipelinesApi.md#get_deep_research_result) | **GET** /org/{organizationId}/pipelines/{pipelineId}/deep-research/{researchId} | Get deep research result
+[**get_pipeline**](PipelinesApi.md#get_pipeline) | **GET** /org/{organizationId}/pipelines/{pipelineId} | Get a pipeline
+[**get_pipeline_events**](PipelinesApi.md#get_pipeline_events) | **GET** /org/{organizationId}/pipelines/{pipelineId}/events | Get pipeline events
+[**get_pipeline_metrics**](PipelinesApi.md#get_pipeline_metrics) | **GET** /org/{organizationId}/pipelines/{pipelineId}/metrics | Get pipeline metrics
+[**get_pipelines**](PipelinesApi.md#get_pipelines) | **GET** /org/{organizationId}/pipelines | Get all pipelines
+[**retrieve_documents**](PipelinesApi.md#retrieve_documents) | **POST** /org/{organizationId}/pipelines/{pipelineId}/retrieval | Retrieve documents from a pipeline
+[**start_deep_research**](PipelinesApi.md#start_deep_research) | **POST** /org/{organizationId}/pipelines/{pipelineId}/deep-research | Start a deep research
+[**start_pipeline**](PipelinesApi.md#start_pipeline) | **POST** /org/{organizationId}/pipelines/{pipelineId}/start | Start a pipeline
+[**stop_pipeline**](PipelinesApi.md#stop_pipeline) | **POST** /org/{organizationId}/pipelines/{pipelineId}/stop | Stop a pipeline
+
+
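+A typical workflow chains several of these calls: create a pipeline from existing connectors, start it, then query it. The sketch below is illustrative rather than generated output; the connector ids are placeholders, and the attribute used to read the new pipeline's id from `CreatePipelineResponse` (`data.id`) is an assumption, so check the generated model in your installed package for the exact field names.
+
+```python
+import os
+
+import vectorize_client
+
+configuration = vectorize_client.Configuration(
+    access_token=os.environ["BEARER_TOKEN"]
+)
+
+with vectorize_client.ApiClient(configuration) as api_client:
+    pipelines = vectorize_client.PipelinesApi(api_client)
+    organization_id = "your-organization-id"
+
+    # Create a pipeline from connectors that already exist in the organization.
+    created = pipelines.create_pipeline(organization_id, {
+        "sourceConnectors": [{"id": "source-connector-id", "type": "AWS_S3"}],
+        "destinationConnector": [{"id": "destination-connector-id", "type": "CAPELLA"}],
+        "aiPlatform": [{"id": "ai-platform-connector-id", "type": "BEDROCK"}],
+        "pipelineName": "Data Processing Pipeline",
+        "schedule": {"type": "manual"},
+    })
+    pipeline_id = created.data.id  # assumed response shape; inspect CreatePipelineResponse
+
+    # Start processing, then run a retrieval query against the pipeline.
+    pipelines.start_pipeline(organization_id, pipeline_id)
+    results = pipelines.retrieve_documents(organization_id, pipeline_id, {
+        "question": "What changed last week?",
+        "numResults": 5,
+    })
+    print(results)
+```
+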
+# **create_pipeline**
+> CreatePipelineResponse create_pipeline(organization_id, pipeline_configuration_schema)
+
+Create a new pipeline
+
+Creates a new pipeline with source connectors, destination connector, and AI platform configuration. The specific configuration fields required depend on the connector types selected.
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+import vectorize_client
+from vectorize_client.models.create_pipeline_response import CreatePipelineResponse
+from vectorize_client.models.pipeline_configuration_schema import PipelineConfigurationSchema
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.PipelinesApi(api_client)
+ organization_id = 'organization_id_example' # str |
+ pipeline_configuration_schema = {"sourceConnectors":[{"id":"4d61dfa9-ce3c-48df-824f-85d1d7421a84","type":"AWS_S3"}],"destinationConnector":[{"id":"e6d268f5-7164-4411-a24b-3d59c78958c8","type":"CAPELLA"}],"aiPlatform":[{"id":"65b8d1f0-32ad-459f-8799-7d359abf4ee4","type":"BEDROCK"}],"pipelineName":"Data Processing Pipeline","schedule":{"type":"manual"}} # PipelineConfigurationSchema |
+
+ try:
+ # Create a new pipeline
+ api_response = api_instance.create_pipeline(organization_id, pipeline_configuration_schema)
+ print("The response of PipelinesApi->create_pipeline:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling PipelinesApi->create_pipeline: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization_id** | **str**| |
+ **pipeline_configuration_schema** | [**PipelineConfigurationSchema**](PipelineConfigurationSchema.md)| |
+
+### Return type
+
+[**CreatePipelineResponse**](CreatePipelineResponse.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: application/json
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | Pipeline created successfully | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **delete_pipeline**
+> DeletePipelineResponse delete_pipeline(organization_id, pipeline_id)
+
+Delete a pipeline
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+import vectorize_client
+from vectorize_client.models.delete_pipeline_response import DeletePipelineResponse
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.PipelinesApi(api_client)
+ organization_id = 'organization_id_example' # str |
+ pipeline_id = 'pipeline_id_example' # str |
+
+ try:
+ # Delete a pipeline
+ api_response = api_instance.delete_pipeline(organization_id, pipeline_id)
+ print("The response of PipelinesApi->delete_pipeline:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling PipelinesApi->delete_pipeline: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization_id** | **str**| |
+ **pipeline_id** | **str**| |
+
+### Return type
+
+[**DeletePipelineResponse**](DeletePipelineResponse.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: Not defined
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | Pipeline deleted successfully | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **get_deep_research_result**
+> GetDeepResearchResponse get_deep_research_result(organization, pipeline, research_id)
+
+Get deep research result
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+import vectorize_client
+from vectorize_client.models.get_deep_research_response import GetDeepResearchResponse
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.PipelinesApi(api_client)
+ organization = 'organization_example' # str |
+ pipeline = 'pipeline_example' # str |
+ research_id = 'research_id_example' # str |
+
+ try:
+ # Get deep research result
+ api_response = api_instance.get_deep_research_result(organization, pipeline, research_id)
+ print("The response of PipelinesApi->get_deep_research_result:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling PipelinesApi->get_deep_research_result: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization** | **str**| |
+ **pipeline** | **str**| |
+ **research_id** | **str**| |
+
+### Return type
+
+[**GetDeepResearchResponse**](GetDeepResearchResponse.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: Not defined
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | Deep research result fetched successfully | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **get_pipeline**
+> GetPipelineResponse get_pipeline(organization_id, pipeline_id)
+
+Get a pipeline
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+import vectorize_client
+from vectorize_client.models.get_pipeline_response import GetPipelineResponse
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.PipelinesApi(api_client)
+ organization_id = 'organization_id_example' # str |
+ pipeline_id = 'pipeline_id_example' # str |
+
+ try:
+ # Get a pipeline
+ api_response = api_instance.get_pipeline(organization_id, pipeline_id)
+ print("The response of PipelinesApi->get_pipeline:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling PipelinesApi->get_pipeline: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization_id** | **str**| |
+ **pipeline_id** | **str**| |
+
+### Return type
+
+[**GetPipelineResponse**](GetPipelineResponse.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: Not defined
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | Pipeline fetched successfully | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **get_pipeline_events**
+> GetPipelineEventsResponse get_pipeline_events(organization_id, pipeline_id, next_token=next_token)
+
+Get pipeline events
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+import vectorize_client
+from vectorize_client.models.get_pipeline_events_response import GetPipelineEventsResponse
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.PipelinesApi(api_client)
+ organization_id = 'organization_id_example' # str |
+ pipeline_id = 'pipeline_id_example' # str |
+ next_token = 'next_token_example' # str | (optional)
+
+ try:
+ # Get pipeline events
+ api_response = api_instance.get_pipeline_events(organization_id, pipeline_id, next_token=next_token)
+ print("The response of PipelinesApi->get_pipeline_events:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling PipelinesApi->get_pipeline_events: %s\n" % e)
+```
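+
+The optional `next_token` parameter suggests cursor-style pagination: pass the token from one response into the next call until none is returned. A hedged sketch that reuses `api_instance`, `organization_id`, and `pipeline_id` from the example above; the attribute names on `GetPipelineEventsResponse` (`events`, `next_token`) are assumptions, so verify them against the generated model:
+
+```python
+# Collect all events across pages.
+all_events = []
+next_token = None
+while True:
+    page = api_instance.get_pipeline_events(organization_id, pipeline_id, next_token=next_token)
+    all_events.extend(page.events)                   # assumed attribute name
+    next_token = getattr(page, "next_token", None)   # assumed attribute name
+    if not next_token:
+        break
+```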
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization_id** | **str**| |
+ **pipeline_id** | **str**| |
+ **next_token** | **str**| | [optional]
+
+### Return type
+
+[**GetPipelineEventsResponse**](GetPipelineEventsResponse.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: Not defined
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | Pipeline events fetched successfully | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **get_pipeline_metrics**
+> GetPipelineMetricsResponse get_pipeline_metrics(organization_id, pipeline_id)
+
+Get pipeline metrics
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+import vectorize_client
+from vectorize_client.models.get_pipeline_metrics_response import GetPipelineMetricsResponse
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.PipelinesApi(api_client)
+ organization_id = 'organization_id_example' # str |
+ pipeline_id = 'pipeline_id_example' # str |
+
+ try:
+ # Get pipeline metrics
+ api_response = api_instance.get_pipeline_metrics(organization_id, pipeline_id)
+ print("The response of PipelinesApi->get_pipeline_metrics:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling PipelinesApi->get_pipeline_metrics: %s\n" % e)
+```
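+
+The `PipelineMetrics` model documented earlier describes one data point (a timestamp plus new/changed/deleted object counts). Assuming `GetPipelineMetricsResponse` exposes those points as a `metrics` list (an assumption; check the generated model), totals can be computed like this, reusing `api_response` from the example above:
+
+```python
+# Tally object counts across all returned data points.
+total_new = sum(m.new_objects for m in api_response.metrics)        # "metrics" is an assumed attribute name
+total_changed = sum(m.changed_objects for m in api_response.metrics)
+total_deleted = sum(m.deleted_objects for m in api_response.metrics)
+print(f"new={total_new} changed={total_changed} deleted={total_deleted}")
+```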
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization_id** | **str**| |
+ **pipeline_id** | **str**| |
+
+### Return type
+
+[**GetPipelineMetricsResponse**](GetPipelineMetricsResponse.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: Not defined
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | Pipeline metrics fetched successfully | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **get_pipelines**
+> GetPipelinesResponse get_pipelines(organization_id)
+
+Get all pipelines
+
+Returns a list of all pipelines in the organization
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+import vectorize_client
+from vectorize_client.models.get_pipelines_response import GetPipelinesResponse
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.PipelinesApi(api_client)
+ organization_id = 'organization_id_example' # str |
+
+ try:
+ # Get all pipelines
+ api_response = api_instance.get_pipelines(organization_id)
+ print("The response of PipelinesApi->get_pipelines:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling PipelinesApi->get_pipelines: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization_id** | **str**| |
+
+### Return type
+
+[**GetPipelinesResponse**](GetPipelinesResponse.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: Not defined
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | Pipelines retrieved successfully | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **retrieve_documents**
+> RetrieveDocumentsResponse retrieve_documents(organization_id, pipeline_id, retrieve_documents_request)
+
+Retrieve documents from a pipeline
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+import vectorize_client
+from vectorize_client.models.retrieve_documents_request import RetrieveDocumentsRequest
+from vectorize_client.models.retrieve_documents_response import RetrieveDocumentsResponse
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.PipelinesApi(api_client)
+ organization_id = 'organization_id_example' # str |
+ pipeline_id = 'pipeline_id_example' # str |
+    retrieve_documents_request = {"question":"example-question","numResults":100,"rerank":True,"metadata-filters":[],"context":{"messages":[]}} # RetrieveDocumentsRequest | 
+
+ try:
+ # Retrieve documents from a pipeline
+ api_response = api_instance.retrieve_documents(organization_id, pipeline_id, retrieve_documents_request)
+ print("The response of PipelinesApi->retrieve_documents:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling PipelinesApi->retrieve_documents: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization_id** | **str**| |
+ **pipeline_id** | **str**| |
+ **retrieve_documents_request** | [**RetrieveDocumentsRequest**](RetrieveDocumentsRequest.md)| |
+
+### Return type
+
+[**RetrieveDocumentsResponse**](RetrieveDocumentsResponse.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: application/json
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | Documents retrieved successfully | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **start_deep_research**
+> StartDeepResearchResponse start_deep_research(organization_id, pipeline_id, start_deep_research_request)
+
+Start a deep research
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+import vectorize_client
+from vectorize_client.models.start_deep_research_request import StartDeepResearchRequest
+from vectorize_client.models.start_deep_research_response import StartDeepResearchResponse
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.PipelinesApi(api_client)
+ organization_id = 'organization_id_example' # str |
+ pipeline_id = 'pipeline_id_example' # str |
+    start_deep_research_request = {"query":"example-query","webSearch":True,"schema":"example-schema","n8n":{"account":"example-account","webhookPath":"/example/path","headers":{}}} # StartDeepResearchRequest | 
+
+ try:
+ # Start a deep research
+ api_response = api_instance.start_deep_research(organization_id, pipeline_id, start_deep_research_request)
+ print("The response of PipelinesApi->start_deep_research:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling PipelinesApi->start_deep_research: %s\n" % e)
+```
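+
+A result is fetched separately by research id (see `get_deep_research_result` above), which suggests the research runs asynchronously. A hedged polling sketch that reuses `api_instance`, `organization_id`, and `pipeline_id` from the example above; the attribute names used on the response models (`research_id`, `ready`) are assumptions, so verify them against `StartDeepResearchResponse` and `GetDeepResearchResponse`:
+
+```python
+import time
+
+started = api_instance.start_deep_research(organization_id, pipeline_id, {
+    "query": "example-query",
+    "webSearch": True,
+})
+research_id = started.research_id  # assumed attribute name
+
+while True:
+    result = api_instance.get_deep_research_result(organization_id, pipeline_id, research_id)
+    if getattr(result, "ready", True):  # assumed attribute name; adjust to the real status field
+        break
+    time.sleep(5)
+
+print(result)
+```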
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization_id** | **str**| |
+ **pipeline_id** | **str**| |
+ **start_deep_research_request** | [**StartDeepResearchRequest**](StartDeepResearchRequest.md)| |
+
+### Return type
+
+[**StartDeepResearchResponse**](StartDeepResearchResponse.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: application/json
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | Deep Research started successfully | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **start_pipeline**
+> StartPipelineResponse start_pipeline(organization_id, pipeline_id)
+
+Start a pipeline
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+import vectorize_client
+from vectorize_client.models.start_pipeline_response import StartPipelineResponse
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.PipelinesApi(api_client)
+ organization_id = 'organization_id_example' # str |
+ pipeline_id = 'pipeline_id_example' # str |
+
+ try:
+ # Start a pipeline
+ api_response = api_instance.start_pipeline(organization_id, pipeline_id)
+ print("The response of PipelinesApi->start_pipeline:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling PipelinesApi->start_pipeline: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization_id** | **str**| |
+ **pipeline_id** | **str**| |
+
+### Return type
+
+[**StartPipelineResponse**](StartPipelineResponse.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: Not defined
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | Pipeline started successfully | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **stop_pipeline**
+> StopPipelineResponse stop_pipeline(organization_id, pipeline_id)
+
+Stop a pipeline
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+import vectorize_client
+from vectorize_client.models.stop_pipeline_response import StopPipelineResponse
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.PipelinesApi(api_client)
+ organization_id = 'organization_id_example' # str |
+ pipeline_id = 'pipeline_id_example' # str |
+
+ try:
+ # Stop a pipeline
+ api_response = api_instance.stop_pipeline(organization_id, pipeline_id)
+ print("The response of PipelinesApi->stop_pipeline:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling PipelinesApi->stop_pipeline: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization_id** | **str**| |
+ **pipeline_id** | **str**| |
+
+### Return type
+
+[**StopPipelineResponse**](StopPipelineResponse.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: Not defined
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | Pipeline stopped successfully | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
diff --git a/docs/Postgresql.md b/docs/Postgresql.md
new file mode 100644
index 0000000..0af32d5
--- /dev/null
+++ b/docs/Postgresql.md
@@ -0,0 +1,31 @@
+# Postgresql
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"POSTGRESQL\") |
+**config** | [**POSTGRESQLConfig**](POSTGRESQLConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.postgresql import Postgresql
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Postgresql from a JSON string
+postgresql_instance = Postgresql.from_json(json)
+# print the JSON string representation of the object
+print(postgresql_instance.to_json())
+
+# convert the object into a dict
+postgresql_dict = postgresql_instance.to_dict()
+# create an instance of Postgresql from a dict
+postgresql_from_dict = Postgresql.from_dict(postgresql_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Postgresql1.md b/docs/Postgresql1.md
new file mode 100644
index 0000000..4889710
--- /dev/null
+++ b/docs/Postgresql1.md
@@ -0,0 +1,29 @@
+# Postgresql1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**POSTGRESQLConfig**](POSTGRESQLConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.postgresql1 import Postgresql1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Postgresql1 from a JSON string
+postgresql1_instance = Postgresql1.from_json(json)
+# print the JSON string representation of the object
+print(postgresql1_instance.to_json())
+
+# convert the object into a dict
+postgresql1_dict = postgresql1_instance.to_dict()
+# create an instance of Postgresql1 from a dict
+postgresql1_from_dict = Postgresql1.from_dict(postgresql1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Postgresql2.md b/docs/Postgresql2.md
new file mode 100644
index 0000000..99ebeaa
--- /dev/null
+++ b/docs/Postgresql2.md
@@ -0,0 +1,30 @@
+# Postgresql2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"POSTGRESQL\") |
+
+## Example
+
+```python
+from vectorize_client.models.postgresql2 import Postgresql2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Postgresql2 from a JSON string
+postgresql2_instance = Postgresql2.from_json(json)
+# print the JSON string representation of the object
+print(postgresql2_instance.to_json())
+
+# convert the object into a dict
+postgresql2_dict = postgresql2_instance.to_dict()
+# create an instance of Postgresql2 from a dict
+postgresql2_from_dict = Postgresql2.from_dict(postgresql2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/QDRANTAuthConfig.md b/docs/QDRANTAuthConfig.md
new file mode 100644
index 0000000..8d0f91b
--- /dev/null
+++ b/docs/QDRANTAuthConfig.md
@@ -0,0 +1,32 @@
+# QDRANTAuthConfig
+
+Authentication configuration for Qdrant
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name for your Qdrant integration |
+**host** | **str** | Host. Example: Enter your host |
+**api_key** | **str** | API Key. Example: Enter your API key |
+
+## Example
+
+```python
+from vectorize_client.models.qdrant_auth_config import QDRANTAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of QDRANTAuthConfig from a JSON string
+qdrant_auth_config_instance = QDRANTAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(qdrant_auth_config_instance.to_json())
+
+# convert the object into a dict
+qdrant_auth_config_dict = qdrant_auth_config_instance.to_dict()
+# create an instance of QDRANTAuthConfig from a dict
+qdrant_auth_config_from_dict = QDRANTAuthConfig.from_dict(qdrant_auth_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/QDRANTConfig.md b/docs/QDRANTConfig.md
new file mode 100644
index 0000000..a66d012
--- /dev/null
+++ b/docs/QDRANTConfig.md
@@ -0,0 +1,30 @@
+# QDRANTConfig
+
+Configuration for Qdrant connector
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**collection** | **str** | Collection Name. Example: Enter collection name |
+
+## Example
+
+```python
+from vectorize_client.models.qdrant_config import QDRANTConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of QDRANTConfig from a JSON string
+qdrant_config_instance = QDRANTConfig.from_json(json)
+# print the JSON string representation of the object
+print(qdrant_config_instance.to_json())
+
+# convert the object into a dict
+qdrant_config_dict = qdrant_config_instance.to_dict()
+# create an instance of QDRANTConfig from a dict
+qdrant_config_from_dict = QDRANTConfig.from_dict(qdrant_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Qdrant.md b/docs/Qdrant.md
new file mode 100644
index 0000000..f4cda66
--- /dev/null
+++ b/docs/Qdrant.md
@@ -0,0 +1,31 @@
+# Qdrant
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"QDRANT\") |
+**config** | [**QDRANTConfig**](QDRANTConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.qdrant import Qdrant
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Qdrant from a JSON string
+qdrant_instance = Qdrant.from_json(json)
+# print the JSON string representation of the object
+print(qdrant_instance.to_json())
+
+# convert the object into a dict
+qdrant_dict = qdrant_instance.to_dict()
+# create an instance of Qdrant from a dict
+qdrant_from_dict = Qdrant.from_dict(qdrant_dict)
+```
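+
+Besides the JSON round trip above, the payload can be built directly from its properties. This is a minimal sketch, assuming keyword construction with the Python property names listed in the tables (which the generated pydantic models are expected to accept); the connector and collection names are placeholders.
+
+```python
+from vectorize_client.models.qdrant import Qdrant
+from vectorize_client.models.qdrant_config import QDRANTConfig
+
+# build the nested config first, then the connector payload (placeholder values)
+qdrant_connector = Qdrant(
+    name="my-qdrant-destination",
+    type="QDRANT",
+    config=QDRANTConfig(collection="my-collection"),
+)
+print(qdrant_connector.to_json())
+```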
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Qdrant1.md b/docs/Qdrant1.md
new file mode 100644
index 0000000..2aad746
--- /dev/null
+++ b/docs/Qdrant1.md
@@ -0,0 +1,29 @@
+# Qdrant1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**QDRANTConfig**](QDRANTConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.qdrant1 import Qdrant1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Qdrant1 from a JSON string
+qdrant1_instance = Qdrant1.from_json(json)
+# print the JSON string representation of the object
+print(qdrant1_instance.to_json())
+
+# convert the object into a dict
+qdrant1_dict = qdrant1_instance.to_dict()
+# create an instance of Qdrant1 from a dict
+qdrant1_from_dict = Qdrant1.from_dict(qdrant1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Qdrant2.md b/docs/Qdrant2.md
new file mode 100644
index 0000000..8337423
--- /dev/null
+++ b/docs/Qdrant2.md
@@ -0,0 +1,30 @@
+# Qdrant2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"QDRANT\") |
+
+## Example
+
+```python
+from vectorize_client.models.qdrant2 import Qdrant2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Qdrant2 from a JSON string
+qdrant2_instance = Qdrant2.from_json(json)
+# print the JSON string representation of the object
+print(qdrant2_instance.to_json())
+
+# convert the object into a dict
+qdrant2_dict = qdrant2_instance.to_dict()
+# create an instance of Qdrant2 from a dict
+qdrant2_from_dict = Qdrant2.from_dict(qdrant2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/RemoveUserFromSourceConnectorRequest.md b/docs/RemoveUserFromSourceConnectorRequest.md
new file mode 100644
index 0000000..a34c16a
--- /dev/null
+++ b/docs/RemoveUserFromSourceConnectorRequest.md
@@ -0,0 +1,29 @@
+# RemoveUserFromSourceConnectorRequest
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**user_id** | **str** | |
+
+## Example
+
+```python
+from vectorize_client.models.remove_user_from_source_connector_request import RemoveUserFromSourceConnectorRequest
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of RemoveUserFromSourceConnectorRequest from a JSON string
+remove_user_from_source_connector_request_instance = RemoveUserFromSourceConnectorRequest.from_json(json)
+# print the JSON string representation of the object
+print(remove_user_from_source_connector_request_instance.to_json())
+
+# convert the object into a dict
+remove_user_from_source_connector_request_dict = remove_user_from_source_connector_request_instance.to_dict()
+# create an instance of RemoveUserFromSourceConnectorRequest from a dict
+remove_user_from_source_connector_request_from_dict = RemoveUserFromSourceConnectorRequest.from_dict(remove_user_from_source_connector_request_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/RemoveUserFromSourceConnectorResponse.md b/docs/RemoveUserFromSourceConnectorResponse.md
new file mode 100644
index 0000000..74de0f4
--- /dev/null
+++ b/docs/RemoveUserFromSourceConnectorResponse.md
@@ -0,0 +1,29 @@
+# RemoveUserFromSourceConnectorResponse
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**message** | **str** | |
+
+## Example
+
+```python
+from vectorize_client.models.remove_user_from_source_connector_response import RemoveUserFromSourceConnectorResponse
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of RemoveUserFromSourceConnectorResponse from a JSON string
+remove_user_from_source_connector_response_instance = RemoveUserFromSourceConnectorResponse.from_json(json)
+# print the JSON string representation of the object
+print(remove_user_from_source_connector_response_instance.to_json())
+
+# convert the object into a dict
+remove_user_from_source_connector_response_dict = remove_user_from_source_connector_response_instance.to_dict()
+# create an instance of RemoveUserFromSourceConnectorResponse from a dict
+remove_user_from_source_connector_response_from_dict = RemoveUserFromSourceConnectorResponse.from_dict(remove_user_from_source_connector_response_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/RetrieveContext.md b/docs/RetrieveContext.md
new file mode 100644
index 0000000..30be06c
--- /dev/null
+++ b/docs/RetrieveContext.md
@@ -0,0 +1,29 @@
+# RetrieveContext
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**messages** | [**List[RetrieveContextMessage]**](RetrieveContextMessage.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.retrieve_context import RetrieveContext
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of RetrieveContext from a JSON string
+retrieve_context_instance = RetrieveContext.from_json(json)
+# print the JSON string representation of the object
+print(retrieve_context_instance.to_json())
+
+# convert the object into a dict
+retrieve_context_dict = retrieve_context_instance.to_dict()
+# create an instance of RetrieveContext from a dict
+retrieve_context_from_dict = RetrieveContext.from_dict(retrieve_context_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/RetrieveContextMessage.md b/docs/RetrieveContextMessage.md
new file mode 100644
index 0000000..8f2c6ac
--- /dev/null
+++ b/docs/RetrieveContextMessage.md
@@ -0,0 +1,30 @@
+# RetrieveContextMessage
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**role** | **str** | |
+**content** | **str** | |
+
+## Example
+
+```python
+from vectorize_client.models.retrieve_context_message import RetrieveContextMessage
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of RetrieveContextMessage from a JSON string
+retrieve_context_message_instance = RetrieveContextMessage.from_json(json)
+# print the JSON string representation of the object
+print(retrieve_context_message_instance.to_json())
+
+# convert the object into a dict
+retrieve_context_message_dict = retrieve_context_message_instance.to_dict()
+# create an instance of RetrieveContextMessage from a dict
+retrieve_context_message_from_dict = RetrieveContextMessage.from_dict(retrieve_context_message_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/RetrieveDocumentsRequest.md b/docs/RetrieveDocumentsRequest.md
new file mode 100644
index 0000000..392856b
--- /dev/null
+++ b/docs/RetrieveDocumentsRequest.md
@@ -0,0 +1,34 @@
+# RetrieveDocumentsRequest
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**question** | **str** | |
+**num_results** | **float** | |
+**rerank** | **bool** | | [optional] [default to True]
+**metadata_filters** | **List[Dict[str, Optional[object]]]** | | [optional]
+**context** | [**RetrieveContext**](RetrieveContext.md) | | [optional]
+**advanced_query** | [**AdvancedQuery**](AdvancedQuery.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.retrieve_documents_request import RetrieveDocumentsRequest
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of RetrieveDocumentsRequest from a JSON string
+retrieve_documents_request_instance = RetrieveDocumentsRequest.from_json(json)
+# print the JSON string representation of the object
+print(retrieve_documents_request_instance.to_json())
+
+# convert the object into a dict
+retrieve_documents_request_dict = retrieve_documents_request_instance.to_dict()
+# create an instance of RetrieveDocumentsRequest from a dict
+retrieve_documents_request_from_dict = RetrieveDocumentsRequest.from_dict(retrieve_documents_request_dict)
+```
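+
+For a more realistic request than the empty JSON placeholder, the sketch below assembles a RetrieveDocumentsRequest together with the RetrieveContext and RetrieveContextMessage models documented above. The question, roles, and values are placeholders, and keyword construction with the Python property names is assumed.
+
+```python
+from vectorize_client.models.retrieve_documents_request import RetrieveDocumentsRequest
+from vectorize_client.models.retrieve_context import RetrieveContext
+from vectorize_client.models.retrieve_context_message import RetrieveContextMessage
+
+# prior conversation turns supplied as retrieval context (placeholder content)
+context = RetrieveContext(messages=[
+    RetrieveContextMessage(role="user", content="Which source connectors are available?"),
+    RetrieveContextMessage(role="assistant", content="Several source and destination connectors are supported."),
+])
+
+request = RetrieveDocumentsRequest(
+    question="How do I configure a SharePoint source connector?",
+    num_results=5,
+    rerank=True,
+    context=context,
+)
+print(request.to_json())
+```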
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/RetrieveDocumentsResponse.md b/docs/RetrieveDocumentsResponse.md
new file mode 100644
index 0000000..75c2951
--- /dev/null
+++ b/docs/RetrieveDocumentsResponse.md
@@ -0,0 +1,32 @@
+# RetrieveDocumentsResponse
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**question** | **str** | |
+**documents** | [**List[Document]**](Document.md) | |
+**average_relevancy** | **float** | |
+**ndcg** | **float** | |
+
+## Example
+
+```python
+from vectorize_client.models.retrieve_documents_response import RetrieveDocumentsResponse
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of RetrieveDocumentsResponse from a JSON string
+retrieve_documents_response_instance = RetrieveDocumentsResponse.from_json(json)
+# print the JSON string representation of the object
+print(retrieve_documents_response_instance.to_json())
+
+# convert the object into a dict
+retrieve_documents_response_dict = retrieve_documents_response_instance.to_dict()
+# create an instance of RetrieveDocumentsResponse from a dict
+retrieve_documents_response_from_dict = RetrieveDocumentsResponse.from_dict(retrieve_documents_response_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/SHAREPOINTAuthConfig.md b/docs/SHAREPOINTAuthConfig.md
new file mode 100644
index 0000000..f3516bd
--- /dev/null
+++ b/docs/SHAREPOINTAuthConfig.md
@@ -0,0 +1,33 @@
+# SHAREPOINTAuthConfig
+
+Authentication configuration for SharePoint
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name |
+**ms_client_id** | **str** | Client Id. Example: Enter Client Id |
+**ms_tenant_id** | **str** | Tenant Id. Example: Enter Tenant Id |
+**ms_client_secret** | **str** | Client Secret. Example: Enter Client Secret |
+
+## Example
+
+```python
+from vectorize_client.models.sharepoint_auth_config import SHAREPOINTAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of SHAREPOINTAuthConfig from a JSON string
+sharepoint_auth_config_instance = SHAREPOINTAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(sharepoint_auth_config_instance.to_json())
+
+# convert the object into a dict
+sharepoint_auth_config_dict = sharepoint_auth_config_instance.to_dict()
+# create an instance of SHAREPOINTAuthConfig from a dict
+sharepoint_auth_config_from_dict = SHAREPOINTAuthConfig.from_dict(sharepoint_auth_config_dict)
+```
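+
+The auth config can also be created directly from its properties. A minimal sketch with placeholder credentials, assuming keyword construction with the Python property names:
+
+```python
+from vectorize_client.models.sharepoint_auth_config import SHAREPOINTAuthConfig
+
+# placeholder app registration values -- substitute your own
+auth = SHAREPOINTAuthConfig(
+    name="my-sharepoint-auth",
+    ms_client_id="00000000-0000-0000-0000-000000000000",
+    ms_tenant_id="00000000-0000-0000-0000-000000000000",
+    ms_client_secret="placeholder-secret",
+)
+print(auth.to_json())
+```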
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/SHAREPOINTConfig.md b/docs/SHAREPOINTConfig.md
new file mode 100644
index 0000000..fcc0e78
--- /dev/null
+++ b/docs/SHAREPOINTConfig.md
@@ -0,0 +1,31 @@
+# SHAREPOINTConfig
+
+Configuration for SharePoint connector
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**file_extensions** | **List[str]** | File Extensions |
+**sites** | **str** | Site Name(s). Example: Filter by site name. All sites if empty. | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.sharepoint_config import SHAREPOINTConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of SHAREPOINTConfig from a JSON string
+sharepoint_config_instance = SHAREPOINTConfig.from_json(json)
+# print the JSON string representation of the object
+print(sharepoint_config_instance.to_json())
+
+# convert the object into a dict
+sharepoint_config_dict = sharepoint_config_instance.to_dict()
+# create an instance of SHAREPOINTConfig from a dict
+sharepoint_config_from_dict = SHAREPOINTConfig.from_dict(sharepoint_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/SINGLESTOREAuthConfig.md b/docs/SINGLESTOREAuthConfig.md
new file mode 100644
index 0000000..86f4857
--- /dev/null
+++ b/docs/SINGLESTOREAuthConfig.md
@@ -0,0 +1,35 @@
+# SINGLESTOREAuthConfig
+
+Authentication configuration for SingleStore
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name for your SingleStore integration |
+**host** | **str** | Host. Example: Enter the host of the deployment |
+**port** | **float** | Port. Example: Enter the port of the deployment |
+**database** | **str** | Database. Example: Enter the database name |
+**username** | **str** | Username. Example: Enter the username |
+**password** | **str** | Password. Example: Enter the username's password |
+
+## Example
+
+```python
+from vectorize_client.models.singlestore_auth_config import SINGLESTOREAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of SINGLESTOREAuthConfig from a JSON string
+singlestore_auth_config_instance = SINGLESTOREAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(singlestore_auth_config_instance.to_json())
+
+# convert the object into a dict
+singlestore_auth_config_dict = singlestore_auth_config_instance.to_dict()
+# create an instance of SINGLESTOREAuthConfig from a dict
+singlestore_auth_config_from_dict = SINGLESTOREAuthConfig.from_dict(singlestore_auth_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/SINGLESTOREConfig.md b/docs/SINGLESTOREConfig.md
new file mode 100644
index 0000000..4b2b33f
--- /dev/null
+++ b/docs/SINGLESTOREConfig.md
@@ -0,0 +1,30 @@
+# SINGLESTOREConfig
+
+Configuration for SingleStore connector
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**table** | **str** | Table Name. Example: Enter table name |
+
+## Example
+
+```python
+from vectorize_client.models.singlestore_config import SINGLESTOREConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of SINGLESTOREConfig from a JSON string
+singlestore_config_instance = SINGLESTOREConfig.from_json(json)
+# print the JSON string representation of the object
+print(singlestore_config_instance.to_json())
+
+# convert the object into a dict
+singlestore_config_dict = singlestore_config_instance.to_dict()
+# create an instance of SINGLESTOREConfig from a dict
+singlestore_config_from_dict = SINGLESTOREConfig.from_dict(singlestore_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/SUPABASEAuthConfig.md b/docs/SUPABASEAuthConfig.md
new file mode 100644
index 0000000..5eedb64
--- /dev/null
+++ b/docs/SUPABASEAuthConfig.md
@@ -0,0 +1,35 @@
+# SUPABASEAuthConfig
+
+Authentication configuration for Supabase
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name for your Supabase integration |
+**host** | **str** | Host. Example: Enter the host of the deployment | [default to 'aws-0-us-east-1.pooler.supabase.com']
+**port** | **float** | Port. Example: Enter the port of the deployment | [optional] [default to 5432]
+**database** | **str** | Database. Example: Enter the database name |
+**username** | **str** | Username. Example: Enter the username |
+**password** | **str** | Password. Example: Enter the username's password |
+
+## Example
+
+```python
+from vectorize_client.models.supabase_auth_config import SUPABASEAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of SUPABASEAuthConfig from a JSON string
+supabase_auth_config_instance = SUPABASEAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(supabase_auth_config_instance.to_json())
+
+# convert the object into a dict
+supabase_auth_config_dict = supabase_auth_config_instance.to_dict()
+# create an instance of SUPABASEAuthConfig from a dict
+supabase_auth_config_from_dict = SUPABASEAuthConfig.from_dict(supabase_auth_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/SUPABASEConfig.md b/docs/SUPABASEConfig.md
new file mode 100644
index 0000000..5328143
--- /dev/null
+++ b/docs/SUPABASEConfig.md
@@ -0,0 +1,30 @@
+# SUPABASEConfig
+
+Configuration for Supabase connector
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**table** | **str** | Table Name. Example: Enter `<table name>` or `<schema>.<table name>` |
+
+## Example
+
+```python
+from vectorize_client.models.supabase_config import SUPABASEConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of SUPABASEConfig from a JSON string
+supabase_config_instance = SUPABASEConfig.from_json(json)
+# print the JSON string representation of the object
+print(supabase_config_instance.to_json())
+
+# convert the object into a dict
+supabase_config_dict = supabase_config_instance.to_dict()
+# create an instance of SUPABASEConfig from a dict
+supabase_config_from_dict = SUPABASEConfig.from_dict(supabase_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/ScheduleSchema.md b/docs/ScheduleSchema.md
new file mode 100644
index 0000000..7a8f93b
--- /dev/null
+++ b/docs/ScheduleSchema.md
@@ -0,0 +1,29 @@
+# ScheduleSchema
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**type** | [**ScheduleSchemaType**](ScheduleSchemaType.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.schedule_schema import ScheduleSchema
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of ScheduleSchema from a JSON string
+schedule_schema_instance = ScheduleSchema.from_json(json)
+# print the JSON string representation of the object
+print(schedule_schema_instance.to_json())
+
+# convert the object into a dict
+schedule_schema_dict = schedule_schema_instance.to_dict()
+# create an instance of ScheduleSchema from a dict
+schedule_schema_from_dict = ScheduleSchema.from_dict(schedule_schema_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/ScheduleSchemaType.md b/docs/ScheduleSchemaType.md
new file mode 100644
index 0000000..3da057e
--- /dev/null
+++ b/docs/ScheduleSchemaType.md
@@ -0,0 +1,14 @@
+# ScheduleSchemaType
+
+
+## Enum
+
+* `MANUAL` (value: `'manual'`)
+
+* `REALTIME` (value: `'realtime'`)
+
+* `CUSTOM` (value: `'custom'`)
+
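+The enum is used as the `type` of a [ScheduleSchema](ScheduleSchema.md). A minimal sketch (the enum's import path is assumed to follow the same module naming pattern as the other models):
+
+```python
+from vectorize_client.models.schedule_schema import ScheduleSchema
+from vectorize_client.models.schedule_schema_type import ScheduleSchemaType
+
+# a schedule that only runs when triggered manually
+schedule = ScheduleSchema(type=ScheduleSchemaType.MANUAL)
+print(schedule.to_json())
+```
+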
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Sharepoint.md b/docs/Sharepoint.md
new file mode 100644
index 0000000..2595c5b
--- /dev/null
+++ b/docs/Sharepoint.md
@@ -0,0 +1,31 @@
+# Sharepoint
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"SHAREPOINT\") |
+**config** | [**SHAREPOINTConfig**](SHAREPOINTConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.sharepoint import Sharepoint
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Sharepoint from a JSON string
+sharepoint_instance = Sharepoint.from_json(json)
+# print the JSON string representation of the object
+print(sharepoint_instance.to_json())
+
+# convert the object into a dict
+sharepoint_dict = sharepoint_instance.to_dict()
+# create an instance of Sharepoint from a dict
+sharepoint_from_dict = Sharepoint.from_dict(sharepoint_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Sharepoint1.md b/docs/Sharepoint1.md
new file mode 100644
index 0000000..b97802a
--- /dev/null
+++ b/docs/Sharepoint1.md
@@ -0,0 +1,29 @@
+# Sharepoint1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**SHAREPOINTConfig**](SHAREPOINTConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.sharepoint1 import Sharepoint1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Sharepoint1 from a JSON string
+sharepoint1_instance = Sharepoint1.from_json(json)
+# print the JSON string representation of the object
+print(sharepoint1_instance.to_json())
+
+# convert the object into a dict
+sharepoint1_dict = sharepoint1_instance.to_dict()
+# create an instance of Sharepoint1 from a dict
+sharepoint1_from_dict = Sharepoint1.from_dict(sharepoint1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Sharepoint2.md b/docs/Sharepoint2.md
new file mode 100644
index 0000000..858414c
--- /dev/null
+++ b/docs/Sharepoint2.md
@@ -0,0 +1,30 @@
+# Sharepoint2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"SHAREPOINT\") |
+
+## Example
+
+```python
+from vectorize_client.models.sharepoint2 import Sharepoint2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Sharepoint2 from a JSON string
+sharepoint2_instance = Sharepoint2.from_json(json)
+# print the JSON string representation of the object
+print(sharepoint2_instance.to_json())
+
+# convert the object into a dict
+sharepoint2_dict = sharepoint2_instance.to_dict()
+# create an instance of Sharepoint2 from a dict
+sharepoint2_from_dict = Sharepoint2.from_dict(sharepoint2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Singlestore.md b/docs/Singlestore.md
new file mode 100644
index 0000000..54ff31f
--- /dev/null
+++ b/docs/Singlestore.md
@@ -0,0 +1,31 @@
+# Singlestore
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"SINGLESTORE\") |
+**config** | [**SINGLESTOREConfig**](SINGLESTOREConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.singlestore import Singlestore
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Singlestore from a JSON string
+singlestore_instance = Singlestore.from_json(json)
+# print the JSON string representation of the object
+print(singlestore_instance.to_json())
+
+# convert the object into a dict
+singlestore_dict = singlestore_instance.to_dict()
+# create an instance of Singlestore from a dict
+singlestore_from_dict = Singlestore.from_dict(singlestore_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Singlestore1.md b/docs/Singlestore1.md
new file mode 100644
index 0000000..0922783
--- /dev/null
+++ b/docs/Singlestore1.md
@@ -0,0 +1,29 @@
+# Singlestore1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**SINGLESTOREConfig**](SINGLESTOREConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.singlestore1 import Singlestore1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Singlestore1 from a JSON string
+singlestore1_instance = Singlestore1.from_json(json)
+# print the JSON string representation of the object
+print(singlestore1_instance.to_json())
+
+# convert the object into a dict
+singlestore1_dict = singlestore1_instance.to_dict()
+# create an instance of Singlestore1 from a dict
+singlestore1_from_dict = Singlestore1.from_dict(singlestore1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Singlestore2.md b/docs/Singlestore2.md
new file mode 100644
index 0000000..aeaddee
--- /dev/null
+++ b/docs/Singlestore2.md
@@ -0,0 +1,30 @@
+# Singlestore2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"SINGLESTORE\") |
+
+## Example
+
+```python
+from vectorize_client.models.singlestore2 import Singlestore2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Singlestore2 from a JSON string
+singlestore2_instance = Singlestore2.from_json(json)
+# print the JSON string representation of the object
+print(singlestore2_instance.to_json())
+
+# convert the object into a dict
+singlestore2_dict = singlestore2_instance.to_dict()
+# create an instance of Singlestore2 from a dict
+singlestore2_from_dict = Singlestore2.from_dict(singlestore2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/SourceConnector.md b/docs/SourceConnector.md
new file mode 100644
index 0000000..295d952
--- /dev/null
+++ b/docs/SourceConnector.md
@@ -0,0 +1,39 @@
+# SourceConnector
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | |
+**type** | **str** | |
+**name** | **str** | |
+**config_doc** | **Dict[str, Optional[object]]** | | [optional]
+**created_at** | **str** | | [optional]
+**created_by_id** | **str** | | [optional]
+**last_updated_by_id** | **str** | | [optional]
+**created_by_email** | **str** | | [optional]
+**last_updated_by_email** | **str** | | [optional]
+**error_message** | **str** | | [optional]
+**verification_status** | **str** | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.source_connector import SourceConnector
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of SourceConnector from a JSON string
+source_connector_instance = SourceConnector.from_json(json)
+# print the JSON string representation of the object
+print(source_connector_instance.to_json())
+
+# convert the object into a dict
+source_connector_dict = source_connector_instance.to_dict()
+# create an instance of SourceConnector from a dict
+source_connector_from_dict = SourceConnector.from_dict(source_connector_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/SourceConnectorInput.md b/docs/SourceConnectorInput.md
new file mode 100644
index 0000000..609c590
--- /dev/null
+++ b/docs/SourceConnectorInput.md
@@ -0,0 +1,32 @@
+# SourceConnectorInput
+
+Source connector configuration
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the source connector |
+**type** | **str** | Type of source connector |
+**config** | [**SourceConnectorInputConfig**](SourceConnectorInputConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.source_connector_input import SourceConnectorInput
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of SourceConnectorInput from a JSON string
+source_connector_input_instance = SourceConnectorInput.from_json(json)
+# print the JSON string representation of the object
+print(source_connector_input_instance.to_json())
+
+# convert the object into a dict
+source_connector_input_dict = source_connector_input_instance.to_dict()
+# create an instance of SourceConnectorInput from a dict
+source_connector_input_from_dict = SourceConnectorInput.from_dict(source_connector_input_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/SourceConnectorInputConfig.md b/docs/SourceConnectorInputConfig.md
new file mode 100644
index 0000000..0ee3030
--- /dev/null
+++ b/docs/SourceConnectorInputConfig.md
@@ -0,0 +1,78 @@
+# SourceConnectorInputConfig
+
+Configuration specific to the connector type
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**file_extensions** | **List[str]** | File Extensions |
+**idle_time** | **float** | Check for updates every (seconds) | [default to 5]
+**recursive** | **bool** | Recursively scan all folders in the bucket | [optional]
+**path_prefix** | **str** | Read starting from this folder (optional). Example: Enter Folder path: /exampleFolder/subFolder | [optional]
+**path_metadata_regex** | **str** | Path Metadata Regex | [optional]
+**path_regex_group_names** | **str** | Path Regex Group Names. Example: Enter Group Name | [optional]
+**spaces** | **str** | Spaces. Example: Spaces to include (name, key or id) |
+**root_parents** | **str** | Restrict ingest to these folder URLs (optional). Example: Enter folder URLs, e.g. https://drive.google.com/drive/folders/1234aBCd5678_eFgH9012iJKL3456opqr | [optional]
+**emoji** | **str** | Emoji Filter. Example: Enter custom emoji filter name | [optional]
+**author** | **str** | Author Filter. Example: Enter author name | [optional]
+**ignore_author** | **str** | Ignore Author Filter. Example: Enter ignore author name | [optional]
+**limit** | **float** | Limit. Example: Enter limit | [optional] [default to 10000]
+**thread_message_inclusion** | **str** | Thread Message Inclusion | [optional] [default to 'ALL']
+**filter_logic** | **str** | Filter Logic | [optional] [default to 'AND']
+**thread_message_mode** | **str** | Thread Message Mode | [optional] [default to 'CONCATENATE']
+**endpoint** | **str** | Endpoint. Example: Choose which api endpoint to use | [default to 'Crawl']
+**request** | **object** | Request Body. Example: JSON config for firecrawl's /crawl or /scrape endpoint. |
+**created_at** | **date** | Created After. Filter for conversations created after this date. Example: Enter a date, e.g. 2012-12-31 |
+**updated_at** | **date** | Updated After. Filter for conversations updated after this date. Example: Enter a date, e.g. 2012-12-31 | [optional]
+**state** | **List[str]** | State | [optional]
+**select_resources** | **str** | Select Notion Resources |
+**database_ids** | **str** | Database IDs |
+**database_names** | **str** | Database Names |
+**page_ids** | **str** | Page IDs |
+**page_names** | **str** | Page Names |
+**sites** | **str** | Site Name(s). Example: Filter by site name. All sites if empty. | [optional]
+**allowed_domains_opt** | **str** | Additional Allowed URLs or prefix(es). Add one or more allowed URLs or URL prefixes. The crawler will read URLs that match these patterns in addition to the seed URL(s). Example: https://docs.example.com | [optional]
+**forbidden_paths** | **str** | Forbidden Paths. Example: Enter forbidden paths (e.g. /admin) | [optional]
+**min_time_between_requests** | **float** | Throttle (ms). Example: Enter minimum time between requests in milliseconds | [optional] [default to 500]
+**max_error_count** | **float** | Max Error Count. Example: Enter maximum error count | [optional] [default to 5]
+**max_urls** | **float** | Max URLs. Example: Enter maximum number of URLs to crawl | [optional] [default to 1000]
+**max_depth** | **float** | Max Depth. Example: Enter maximum crawl depth | [optional] [default to 50]
+**reindex_interval_seconds** | **float** | Reindex Interval (seconds). Example: Enter reindex interval in seconds | [optional] [default to 3600]
+**repositories** | **str** | Repositories. Example: Example: owner1/repo1 |
+**include_pull_requests** | **bool** | Include Pull Requests | [default to True]
+**pull_request_status** | **str** | Pull Request Status | [default to 'all']
+**pull_request_labels** | **str** | Pull Request Labels. Example: Optionally filter by label. E.g. fix | [optional]
+**include_issues** | **bool** | Include Issues | [default to True]
+**issue_status** | **str** | Issue Status | [default to 'all']
+**issue_labels** | **str** | Issue Labels. Example: Optionally filter by label. E.g. bug | [optional]
+**max_items** | **float** | Max Items. Example: Enter maximum number of items to fetch | [default to 1000]
+**created_after** | **date** | Created After. Filter for items created after this date. Example: Enter a date, e.g. 2012-12-31 | [optional]
+**start_date** | **date** | Start Date. Include meetings from this date forward. Example: Enter a date, e.g. 2023-12-31 |
+**end_date** | **date** | End Date. Include meetings up to this date only. Example: Enter a date, e.g. 2023-12-31 | [optional]
+**title_filter_type** | **str** | | [default to 'AND']
+**title_filter** | **str** | Title Filter. Only include meetings with this text in the title. Example: Enter meeting title | [optional]
+**participant_filter_type** | **str** | | [default to 'AND']
+**participant_filter** | **str** | Participant's Email Filter. Include meetings where these participants were invited. Example: Enter participant email | [optional]
+**max_meetings** | **float** | Max Meetings. Enter -1 for all available meetings, or specify a limit. Example: Enter maximum number of meetings to retrieve. (-1 for all) | [optional] [default to -1]
+
+## Example
+
+```python
+from vectorize_client.models.source_connector_input_config import SourceConnectorInputConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of SourceConnectorInputConfig from a JSON string
+source_connector_input_config_instance = SourceConnectorInputConfig.from_json(json)
+# print the JSON string representation of the object
+print(source_connector_input_config_instance.to_json())
+
+# convert the object into a dict
+source_connector_input_config_dict = source_connector_input_config_instance.to_dict()
+# create an instance of SourceConnectorInputConfig from a dict
+source_connector_input_config_from_dict = SourceConnectorInputConfig.from_dict(source_connector_input_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/SourceConnectorSchema.md b/docs/SourceConnectorSchema.md
new file mode 100644
index 0000000..44f7835
--- /dev/null
+++ b/docs/SourceConnectorSchema.md
@@ -0,0 +1,31 @@
+# SourceConnectorSchema
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | |
+**type** | [**SourceConnectorType**](SourceConnectorType.md) | |
+**config** | **Dict[str, Optional[object]]** | |
+
+## Example
+
+```python
+from vectorize_client.models.source_connector_schema import SourceConnectorSchema
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of SourceConnectorSchema from a JSON string
+source_connector_schema_instance = SourceConnectorSchema.from_json(json)
+# print the JSON string representation of the object
+print(source_connector_schema_instance.to_json())
+
+# convert the object into a dict
+source_connector_schema_dict = source_connector_schema_instance.to_dict()
+# create an instance of SourceConnectorSchema from a dict
+source_connector_schema_from_dict = SourceConnectorSchema.from_dict(source_connector_schema_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/SourceConnectorType.md b/docs/SourceConnectorType.md
new file mode 100644
index 0000000..65739cf
--- /dev/null
+++ b/docs/SourceConnectorType.md
@@ -0,0 +1,56 @@
+# SourceConnectorType
+
+
+## Enum
+
+* `AWS_S3` (value: `'AWS_S3'`)
+
+* `AZURE_BLOB` (value: `'AZURE_BLOB'`)
+
+* `CONFLUENCE` (value: `'CONFLUENCE'`)
+
+* `DISCORD` (value: `'DISCORD'`)
+
+* `DROPBOX` (value: `'DROPBOX'`)
+
+* `DROPBOX_OAUTH` (value: `'DROPBOX_OAUTH'`)
+
+* `DROPBOX_OAUTH_MULTI` (value: `'DROPBOX_OAUTH_MULTI'`)
+
+* `DROPBOX_OAUTH_MULTI_CUSTOM` (value: `'DROPBOX_OAUTH_MULTI_CUSTOM'`)
+
+* `GOOGLE_DRIVE_OAUTH` (value: `'GOOGLE_DRIVE_OAUTH'`)
+
+* `GOOGLE_DRIVE` (value: `'GOOGLE_DRIVE'`)
+
+* `GOOGLE_DRIVE_OAUTH_MULTI` (value: `'GOOGLE_DRIVE_OAUTH_MULTI'`)
+
+* `GOOGLE_DRIVE_OAUTH_MULTI_CUSTOM` (value: `'GOOGLE_DRIVE_OAUTH_MULTI_CUSTOM'`)
+
+* `FIRECRAWL` (value: `'FIRECRAWL'`)
+
+* `GCS` (value: `'GCS'`)
+
+* `INTERCOM` (value: `'INTERCOM'`)
+
+* `NOTION` (value: `'NOTION'`)
+
+* `NOTION_OAUTH_MULTI` (value: `'NOTION_OAUTH_MULTI'`)
+
+* `NOTION_OAUTH_MULTI_CUSTOM` (value: `'NOTION_OAUTH_MULTI_CUSTOM'`)
+
+* `ONE_DRIVE` (value: `'ONE_DRIVE'`)
+
+* `SHAREPOINT` (value: `'SHAREPOINT'`)
+
+* `WEB_CRAWLER` (value: `'WEB_CRAWLER'`)
+
+* `FILE_UPLOAD` (value: `'FILE_UPLOAD'`)
+
+* `GITHUB` (value: `'GITHUB'`)
+
+* `FIREFLIES` (value: `'FIREFLIES'`)
+
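+A hedged usage sketch: the enum value becomes the `type` of a [SourceConnectorSchema](SourceConnectorSchema.md). The config keys below are hypothetical placeholders (the schema only requires a `Dict[str, Optional[object]]`), and the enum's import path is assumed to follow the same module naming pattern as the other models.
+
+```python
+from vectorize_client.models.source_connector_schema import SourceConnectorSchema
+from vectorize_client.models.source_connector_type import SourceConnectorType
+
+# hypothetical config keys -- consult the connector-specific docs for the real ones
+source = SourceConnectorSchema(
+    id="my-s3-source",
+    type=SourceConnectorType.AWS_S3,
+    config={"bucket": "my-bucket", "path-prefix": "docs/"},
+)
+print(source.to_json())
+```
+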
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/StartDeepResearchRequest.md b/docs/StartDeepResearchRequest.md
new file mode 100644
index 0000000..918c347
--- /dev/null
+++ b/docs/StartDeepResearchRequest.md
@@ -0,0 +1,32 @@
+# StartDeepResearchRequest
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**query** | **str** | |
+**web_search** | **bool** | | [optional] [default to False]
+**var_schema** | **str** | | [optional]
+**n8n** | [**N8NConfig**](N8NConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.start_deep_research_request import StartDeepResearchRequest
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of StartDeepResearchRequest from a JSON string
+start_deep_research_request_instance = StartDeepResearchRequest.from_json(json)
+# print the JSON string representation of the object
+print(start_deep_research_request_instance.to_json())
+
+# convert the object into a dict
+start_deep_research_request_dict = start_deep_research_request_instance.to_dict()
+# create an instance of StartDeepResearchRequest from a dict
+start_deep_research_request_from_dict = StartDeepResearchRequest.from_dict(start_deep_research_request_dict)
+```
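+
+Note that the research output schema is exposed on the Python model as `var_schema` (the `var_` prefix is the generator's way of avoiding clashes with reserved names). A minimal sketch with placeholder values:
+
+```python
+from vectorize_client.models.start_deep_research_request import StartDeepResearchRequest
+
+request = StartDeepResearchRequest(
+    query="Summarize recent changes to our connector documentation",  # placeholder query
+    web_search=True,
+    var_schema='{"type": "object"}',  # placeholder JSON schema string
+)
+print(request.to_json())
+```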
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/StartDeepResearchResponse.md b/docs/StartDeepResearchResponse.md
new file mode 100644
index 0000000..7130a15
--- /dev/null
+++ b/docs/StartDeepResearchResponse.md
@@ -0,0 +1,29 @@
+# StartDeepResearchResponse
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**research_id** | **str** | |
+
+## Example
+
+```python
+from vectorize_client.models.start_deep_research_response import StartDeepResearchResponse
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of StartDeepResearchResponse from a JSON string
+start_deep_research_response_instance = StartDeepResearchResponse.from_json(json)
+# print the JSON string representation of the object
+print(start_deep_research_response_instance.to_json())
+
+# convert the object into a dict
+start_deep_research_response_dict = start_deep_research_response_instance.to_dict()
+# create an instance of StartDeepResearchResponse from a dict
+start_deep_research_response_from_dict = StartDeepResearchResponse.from_dict(start_deep_research_response_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/StartExtractionRequest.md b/docs/StartExtractionRequest.md
new file mode 100644
index 0000000..237b1d3
--- /dev/null
+++ b/docs/StartExtractionRequest.md
@@ -0,0 +1,33 @@
+# StartExtractionRequest
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**file_id** | **str** | |
+**type** | [**ExtractionType**](ExtractionType.md) | | [optional] [default to ExtractionType.IRIS]
+**chunking_strategy** | [**ExtractionChunkingStrategy**](ExtractionChunkingStrategy.md) | | [optional] [default to ExtractionChunkingStrategy.MARKDOWN]
+**chunk_size** | **float** | | [optional] [default to 256]
+**metadata** | [**MetadataExtractionStrategy**](MetadataExtractionStrategy.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.start_extraction_request import StartExtractionRequest
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of StartExtractionRequest from a JSON string
+start_extraction_request_instance = StartExtractionRequest.from_json(json)
+# print the JSON string representation of the object
+print(start_extraction_request_instance.to_json())
+
+# convert the object into a dict
+start_extraction_request_dict = start_extraction_request_instance.to_dict()
+# create an instance of StartExtractionRequest from a dict
+start_extraction_request_from_dict = StartExtractionRequest.from_dict(start_extraction_request_dict)
+```
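+
+Only `file_id` is required; the remaining properties fall back to the defaults listed above (IRIS extraction, MARKDOWN chunking, chunk size 256). A minimal sketch with a placeholder file id taken from an earlier file upload:
+
+```python
+from vectorize_client.models.start_extraction_request import StartExtractionRequest
+
+# file_id would come from a previous file upload; the value here is a placeholder
+request = StartExtractionRequest(file_id="file-123", chunk_size=512)
+print(request.to_json())
+```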
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/StartExtractionResponse.md b/docs/StartExtractionResponse.md
new file mode 100644
index 0000000..72aeeed
--- /dev/null
+++ b/docs/StartExtractionResponse.md
@@ -0,0 +1,30 @@
+# StartExtractionResponse
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**message** | **str** | |
+**extraction_id** | **str** | |
+
+## Example
+
+```python
+from vectorize_client.models.start_extraction_response import StartExtractionResponse
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of StartExtractionResponse from a JSON string
+start_extraction_response_instance = StartExtractionResponse.from_json(json)
+# print the JSON string representation of the object
+print(start_extraction_response_instance.to_json())
+
+# convert the object into a dict
+start_extraction_response_dict = start_extraction_response_instance.to_dict()
+# create an instance of StartExtractionResponse from a dict
+start_extraction_response_from_dict = StartExtractionResponse.from_dict(start_extraction_response_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/StartFileUploadRequest.md b/docs/StartFileUploadRequest.md
new file mode 100644
index 0000000..221e641
--- /dev/null
+++ b/docs/StartFileUploadRequest.md
@@ -0,0 +1,30 @@
+# StartFileUploadRequest
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | |
+**content_type** | **str** | |
+
+## Example
+
+```python
+from vectorize_client.models.start_file_upload_request import StartFileUploadRequest
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of StartFileUploadRequest from a JSON string
+start_file_upload_request_instance = StartFileUploadRequest.from_json(json)
+# print the JSON string representation of the object
+print(start_file_upload_request_instance.to_json())
+
+# convert the object into a dict
+start_file_upload_request_dict = start_file_upload_request_instance.to_dict()
+# create an instance of StartFileUploadRequest from a dict
+start_file_upload_request_from_dict = StartFileUploadRequest.from_dict(start_file_upload_request_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/StartFileUploadResponse.md b/docs/StartFileUploadResponse.md
new file mode 100644
index 0000000..d321ab2
--- /dev/null
+++ b/docs/StartFileUploadResponse.md
@@ -0,0 +1,30 @@
+# StartFileUploadResponse
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**file_id** | **str** | |
+**upload_url** | **str** | |
+
+## Example
+
+```python
+from vectorize_client.models.start_file_upload_response import StartFileUploadResponse
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of StartFileUploadResponse from a JSON string
+start_file_upload_response_instance = StartFileUploadResponse.from_json(json)
+# print the JSON string representation of the object
+print(start_file_upload_response_instance.to_json())
+
+# convert the object into a dict
+start_file_upload_response_dict = start_file_upload_response_instance.to_dict()
+# create an instance of StartFileUploadResponse from a dict
+start_file_upload_response_from_dict = StartFileUploadResponse.from_dict(start_file_upload_response_dict)
+```
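+
+The request/response pair above is used together: describe the file, receive a `file_id` and a pre-authorized `upload_url`, then send the file bytes to that URL. The sketch below assumes the `upload_url` accepts an HTTP PUT of the raw bytes with the same content type and that the `requests` package is installed; both are assumptions, not guarantees from this documentation.
+
+```python
+import requests  # assumption: any HTTP client would do
+
+from vectorize_client.models.start_file_upload_request import StartFileUploadRequest
+from vectorize_client.models.start_file_upload_response import StartFileUploadResponse
+
+# 1. describe the file to upload (placeholder name and content type)
+upload_request = StartFileUploadRequest(name="report.pdf", content_type="application/pdf")
+
+# 2. in a real run the response comes back from the file-upload endpoint;
+#    placeholder values keep this sketch self-contained
+response = StartFileUploadResponse(file_id="file-123", upload_url="https://example.com/upload/file-123")
+
+# 3. send the raw bytes to the pre-authorized URL (HTTP PUT is an assumption)
+with open("report.pdf", "rb") as fh:
+    requests.put(response.upload_url, data=fh, headers={"Content-Type": upload_request.content_type})
+
+print(f"uploaded file id: {response.file_id}")
+```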
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/StartFileUploadToConnectorRequest.md b/docs/StartFileUploadToConnectorRequest.md
new file mode 100644
index 0000000..f6da191
--- /dev/null
+++ b/docs/StartFileUploadToConnectorRequest.md
@@ -0,0 +1,31 @@
+# StartFileUploadToConnectorRequest
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | |
+**content_type** | **str** | |
+**metadata** | **str** | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.start_file_upload_to_connector_request import StartFileUploadToConnectorRequest
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of StartFileUploadToConnectorRequest from a JSON string
+start_file_upload_to_connector_request_instance = StartFileUploadToConnectorRequest.from_json(json)
+# print the JSON string representation of the object
+print(start_file_upload_to_connector_request_instance.to_json())
+
+# convert the object into a dict
+start_file_upload_to_connector_request_dict = start_file_upload_to_connector_request_instance.to_dict()
+# create an instance of StartFileUploadToConnectorRequest from a dict
+start_file_upload_to_connector_request_from_dict = StartFileUploadToConnectorRequest.from_dict(start_file_upload_to_connector_request_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/StartFileUploadToConnectorResponse.md b/docs/StartFileUploadToConnectorResponse.md
new file mode 100644
index 0000000..62a380f
--- /dev/null
+++ b/docs/StartFileUploadToConnectorResponse.md
@@ -0,0 +1,29 @@
+# StartFileUploadToConnectorResponse
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**upload_url** | **str** | |
+
+## Example
+
+```python
+from vectorize_client.models.start_file_upload_to_connector_response import StartFileUploadToConnectorResponse
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of StartFileUploadToConnectorResponse from a JSON string
+start_file_upload_to_connector_response_instance = StartFileUploadToConnectorResponse.from_json(json)
+# print the JSON string representation of the object
+print(start_file_upload_to_connector_response_instance.to_json())
+
+# convert the object into a dict
+start_file_upload_to_connector_response_dict = start_file_upload_to_connector_response_instance.to_dict()
+# create an instance of StartFileUploadToConnectorResponse from a dict
+start_file_upload_to_connector_response_from_dict = StartFileUploadToConnectorResponse.from_dict(start_file_upload_to_connector_response_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/StartPipelineResponse.md b/docs/StartPipelineResponse.md
new file mode 100644
index 0000000..d70222b
--- /dev/null
+++ b/docs/StartPipelineResponse.md
@@ -0,0 +1,29 @@
+# StartPipelineResponse
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**message** | **str** | |
+
+## Example
+
+```python
+from vectorize_client.models.start_pipeline_response import StartPipelineResponse
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of StartPipelineResponse from a JSON string
+start_pipeline_response_instance = StartPipelineResponse.from_json(json)
+# print the JSON string representation of the object
+print(start_pipeline_response_instance.to_json())
+
+# convert the object into a dict
+start_pipeline_response_dict = start_pipeline_response_instance.to_dict()
+# create an instance of StartPipelineResponse from a dict
+start_pipeline_response_from_dict = StartPipelineResponse.from_dict(start_pipeline_response_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/StopPipelineResponse.md b/docs/StopPipelineResponse.md
new file mode 100644
index 0000000..0917c99
--- /dev/null
+++ b/docs/StopPipelineResponse.md
@@ -0,0 +1,29 @@
+# StopPipelineResponse
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**message** | **str** | |
+
+## Example
+
+```python
+from vectorize_client.models.stop_pipeline_response import StopPipelineResponse
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of StopPipelineResponse from a JSON string
+stop_pipeline_response_instance = StopPipelineResponse.from_json(json)
+# print the JSON string representation of the object
+print(stop_pipeline_response_instance.to_json())
+
+# convert the object into a dict
+stop_pipeline_response_dict = stop_pipeline_response_instance.to_dict()
+# create an instance of StopPipelineResponse from a dict
+stop_pipeline_response_from_dict = StopPipelineResponse.from_dict(stop_pipeline_response_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Supabase.md b/docs/Supabase.md
new file mode 100644
index 0000000..fed78fa
--- /dev/null
+++ b/docs/Supabase.md
@@ -0,0 +1,31 @@
+# Supabase
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"SUPABASE\") |
+**config** | [**SUPABASEConfig**](SUPABASEConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.supabase import Supabase
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Supabase from a JSON string
+supabase_instance = Supabase.from_json(json)
+# print the JSON string representation of the object
+print(supabase_instance.to_json())
+
+# convert the object into a dict
+supabase_dict = supabase_instance.to_dict()
+# create an instance of Supabase from a dict
+supabase_from_dict = Supabase.from_dict(supabase_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Supabase1.md b/docs/Supabase1.md
new file mode 100644
index 0000000..2dce336
--- /dev/null
+++ b/docs/Supabase1.md
@@ -0,0 +1,29 @@
+# Supabase1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**SUPABASEConfig**](SUPABASEConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.supabase1 import Supabase1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Supabase1 from a JSON string
+supabase1_instance = Supabase1.from_json(json)
+# print the JSON string representation of the object
+print(supabase1_instance.to_json())
+
+# convert the object into a dict
+supabase1_dict = supabase1_instance.to_dict()
+# create an instance of Supabase1 from a dict
+supabase1_from_dict = Supabase1.from_dict(supabase1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Supabase2.md b/docs/Supabase2.md
new file mode 100644
index 0000000..bfa3372
--- /dev/null
+++ b/docs/Supabase2.md
@@ -0,0 +1,30 @@
+# Supabase2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"SUPABASE\") |
+
+## Example
+
+```python
+from vectorize_client.models.supabase2 import Supabase2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Supabase2 from a JSON string
+supabase2_instance = Supabase2.from_json(json)
+# print the JSON string representation of the object
+print(supabase2_instance.to_json())
+
+# convert the object into a dict
+supabase2_dict = supabase2_instance.to_dict()
+# create an instance of Supabase2 from a dict
+supabase2_from_dict = Supabase2.from_dict(supabase2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/TURBOPUFFERAuthConfig.md b/docs/TURBOPUFFERAuthConfig.md
new file mode 100644
index 0000000..c184b76
--- /dev/null
+++ b/docs/TURBOPUFFERAuthConfig.md
@@ -0,0 +1,31 @@
+# TURBOPUFFERAuthConfig
+
+Authentication configuration for Turbopuffer
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name for your Turbopuffer integration |
+**api_key** | **str** | API Key. Example: Enter your API key |
+
+## Example
+
+```python
+from vectorize_client.models.turbopuffer_auth_config import TURBOPUFFERAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of TURBOPUFFERAuthConfig from a JSON string
+turbopuffer_auth_config_instance = TURBOPUFFERAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(turbopuffer_auth_config_instance.to_json())
+
+# convert the object into a dict
+turbopuffer_auth_config_dict = turbopuffer_auth_config_instance.to_dict()
+# create an instance of TURBOPUFFERAuthConfig from a dict
+turbopuffer_auth_config_from_dict = TURBOPUFFERAuthConfig.from_dict(turbopuffer_auth_config_dict)
+```
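+
+Because the JSON string above is only a placeholder, it can be quicker to build the model directly. A minimal sketch with made-up values, assuming the generated pydantic model accepts the snake_case keyword arguments listed in the table:
+
+```python
+from vectorize_client.models.turbopuffer_auth_config import TURBOPUFFERAuthConfig
+
+# Illustrative values only; supply your real integration name and API key.
+turbopuffer_auth_config = TURBOPUFFERAuthConfig(
+    name="My Turbopuffer integration",
+    api_key="tpuf_example_key",
+)
+print(turbopuffer_auth_config.to_json())
+```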
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/TURBOPUFFERConfig.md b/docs/TURBOPUFFERConfig.md
new file mode 100644
index 0000000..049615f
--- /dev/null
+++ b/docs/TURBOPUFFERConfig.md
@@ -0,0 +1,30 @@
+# TURBOPUFFERConfig
+
+Configuration for Turbopuffer connector
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**namespace** | **str** | Namespace. Example: Enter namespace name |
+
+## Example
+
+```python
+from vectorize_client.models.turbopuffer_config import TURBOPUFFERConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of TURBOPUFFERConfig from a JSON string
+turbopuffer_config_instance = TURBOPUFFERConfig.from_json(json)
+# print the JSON string representation of the object
+print(turbopuffer_config_instance.to_json())
+
+# convert the object into a dict
+turbopuffer_config_dict = turbopuffer_config_instance.to_dict()
+# create an instance of TURBOPUFFERConfig from a dict
+turbopuffer_config_from_dict = TURBOPUFFERConfig.from_dict(turbopuffer_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Turbopuffer.md b/docs/Turbopuffer.md
new file mode 100644
index 0000000..dce2d74
--- /dev/null
+++ b/docs/Turbopuffer.md
@@ -0,0 +1,31 @@
+# Turbopuffer
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"TURBOPUFFER\") |
+**config** | [**TURBOPUFFERConfig**](TURBOPUFFERConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.turbopuffer import Turbopuffer
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Turbopuffer from a JSON string
+turbopuffer_instance = Turbopuffer.from_json(json)
+# print the JSON string representation of the object
+print(turbopuffer_instance.to_json())
+
+# convert the object into a dict
+turbopuffer_dict = turbopuffer_instance.to_dict()
+# create an instance of Turbopuffer from a dict
+turbopuffer_from_dict = Turbopuffer.from_dict(turbopuffer_dict)
+```
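+
+The connector can also be assembled from its parts instead of parsing JSON. A sketch with placeholder values; the nested config uses the `namespace` field documented in [TURBOPUFFERConfig](TURBOPUFFERConfig.md):
+
+```python
+from vectorize_client.models.turbopuffer import Turbopuffer
+from vectorize_client.models.turbopuffer_config import TURBOPUFFERConfig
+
+# Placeholder values; the type must be "TURBOPUFFER" for this connector.
+turbopuffer = Turbopuffer(
+    name="My Turbopuffer destination",
+    type="TURBOPUFFER",
+    config=TURBOPUFFERConfig(namespace="my-namespace"),
+)
+print(turbopuffer.to_json())
+```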
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Turbopuffer1.md b/docs/Turbopuffer1.md
new file mode 100644
index 0000000..ccda8f0
--- /dev/null
+++ b/docs/Turbopuffer1.md
@@ -0,0 +1,29 @@
+# Turbopuffer1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**TURBOPUFFERConfig**](TURBOPUFFERConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.turbopuffer1 import Turbopuffer1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Turbopuffer1 from a JSON string
+turbopuffer1_instance = Turbopuffer1.from_json(json)
+# print the JSON string representation of the object
+print(turbopuffer1_instance.to_json())
+
+# convert the object into a dict
+turbopuffer1_dict = turbopuffer1_instance.to_dict()
+# create an instance of Turbopuffer1 from a dict
+turbopuffer1_from_dict = Turbopuffer1.from_dict(turbopuffer1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Turbopuffer2.md b/docs/Turbopuffer2.md
new file mode 100644
index 0000000..d08c428
--- /dev/null
+++ b/docs/Turbopuffer2.md
@@ -0,0 +1,30 @@
+# Turbopuffer2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"TURBOPUFFER\") |
+
+## Example
+
+```python
+from vectorize_client.models.turbopuffer2 import Turbopuffer2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Turbopuffer2 from a JSON string
+turbopuffer2_instance = Turbopuffer2.from_json(json)
+# print the JSON string representation of the object
+print(turbopuffer2_instance.to_json())
+
+# convert the object into a dict
+turbopuffer2_dict = turbopuffer2_instance.to_dict()
+# create an instance of Turbopuffer2 from a dict
+turbopuffer2_from_dict = Turbopuffer2.from_dict(turbopuffer2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/UpdateAIPlatformConnectorRequest.md b/docs/UpdateAIPlatformConnectorRequest.md
new file mode 100644
index 0000000..2fad614
--- /dev/null
+++ b/docs/UpdateAIPlatformConnectorRequest.md
@@ -0,0 +1,29 @@
+# UpdateAiplatformConnectorRequest
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**VOYAGEAuthConfig**](VOYAGEAuthConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.update_aiplatform_connector_request import UpdateAiplatformConnectorRequest
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of UpdateAiplatformConnectorRequest from a JSON string
+update_aiplatform_connector_request_instance = UpdateAiplatformConnectorRequest.from_json(json)
+# print the JSON string representation of the object
+print(update_aiplatform_connector_request_instance.to_json())
+
+# convert the object into a dict
+update_aiplatform_connector_request_dict = update_aiplatform_connector_request_instance.to_dict()
+# create an instance of UpdateAiplatformConnectorRequest from a dict
+update_aiplatform_connector_request_from_dict = UpdateAiplatformConnectorRequest.from_dict(update_aiplatform_connector_request_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/UpdateAIPlatformConnectorResponse.md b/docs/UpdateAIPlatformConnectorResponse.md
new file mode 100644
index 0000000..5209503
--- /dev/null
+++ b/docs/UpdateAIPlatformConnectorResponse.md
@@ -0,0 +1,30 @@
+# UpdateAIPlatformConnectorResponse
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**message** | **str** | |
+**data** | [**UpdatedAIPlatformConnectorData**](UpdatedAIPlatformConnectorData.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.update_ai_platform_connector_response import UpdateAIPlatformConnectorResponse
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of UpdateAIPlatformConnectorResponse from a JSON string
+update_ai_platform_connector_response_instance = UpdateAIPlatformConnectorResponse.from_json(json)
+# print the JSON string representation of the object
+print(update_ai_platform_connector_response_instance.to_json())
+
+# convert the object into a dict
+update_ai_platform_connector_response_dict = update_ai_platform_connector_response_instance.to_dict()
+# create an instance of UpdateAIPlatformConnectorResponse from a dict
+update_ai_platform_connector_response_from_dict = UpdateAIPlatformConnectorResponse.from_dict(update_ai_platform_connector_response_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/UpdateDestinationConnectorRequest.md b/docs/UpdateDestinationConnectorRequest.md
new file mode 100644
index 0000000..7998678
--- /dev/null
+++ b/docs/UpdateDestinationConnectorRequest.md
@@ -0,0 +1,29 @@
+# UpdateDestinationConnectorRequest
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**TURBOPUFFERConfig**](TURBOPUFFERConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.update_destination_connector_request import UpdateDestinationConnectorRequest
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of UpdateDestinationConnectorRequest from a JSON string
+update_destination_connector_request_instance = UpdateDestinationConnectorRequest.from_json(json)
+# print the JSON string representation of the object
+print(update_destination_connector_request_instance.to_json())
+
+# convert the object into a dict
+update_destination_connector_request_dict = update_destination_connector_request_instance.to_dict()
+# create an instance of UpdateDestinationConnectorRequest from a dict
+update_destination_connector_request_from_dict = UpdateDestinationConnectorRequest.from_dict(update_destination_connector_request_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/UpdateDestinationConnectorResponse.md b/docs/UpdateDestinationConnectorResponse.md
new file mode 100644
index 0000000..58c1725
--- /dev/null
+++ b/docs/UpdateDestinationConnectorResponse.md
@@ -0,0 +1,30 @@
+# UpdateDestinationConnectorResponse
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**message** | **str** | |
+**data** | [**UpdatedDestinationConnectorData**](UpdatedDestinationConnectorData.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.update_destination_connector_response import UpdateDestinationConnectorResponse
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of UpdateDestinationConnectorResponse from a JSON string
+update_destination_connector_response_instance = UpdateDestinationConnectorResponse.from_json(json)
+# print the JSON string representation of the object
+print(update_destination_connector_response_instance.to_json())
+
+# convert the object into a dict
+update_destination_connector_response_dict = update_destination_connector_response_instance.to_dict()
+# create an instance of UpdateDestinationConnectorResponse from a dict
+update_destination_connector_response_from_dict = UpdateDestinationConnectorResponse.from_dict(update_destination_connector_response_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/UpdateSourceConnectorRequest.md b/docs/UpdateSourceConnectorRequest.md
new file mode 100644
index 0000000..47d6740
--- /dev/null
+++ b/docs/UpdateSourceConnectorRequest.md
@@ -0,0 +1,29 @@
+# UpdateSourceConnectorRequest
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**FIREFLIESConfig**](FIREFLIESConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.update_source_connector_request import UpdateSourceConnectorRequest
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of UpdateSourceConnectorRequest from a JSON string
+update_source_connector_request_instance = UpdateSourceConnectorRequest.from_json(json)
+# print the JSON string representation of the object
+print(update_source_connector_request_instance.to_json())
+
+# convert the object into a dict
+update_source_connector_request_dict = update_source_connector_request_instance.to_dict()
+# create an instance of UpdateSourceConnectorRequest from a dict
+update_source_connector_request_from_dict = UpdateSourceConnectorRequest.from_dict(update_source_connector_request_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/UpdateSourceConnectorResponse.md b/docs/UpdateSourceConnectorResponse.md
new file mode 100644
index 0000000..2cb49dc
--- /dev/null
+++ b/docs/UpdateSourceConnectorResponse.md
@@ -0,0 +1,30 @@
+# UpdateSourceConnectorResponse
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**message** | **str** | |
+**data** | [**UpdateSourceConnectorResponseData**](UpdateSourceConnectorResponseData.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.update_source_connector_response import UpdateSourceConnectorResponse
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of UpdateSourceConnectorResponse from a JSON string
+update_source_connector_response_instance = UpdateSourceConnectorResponse.from_json(json)
+# print the JSON string representation of the object
+print(update_source_connector_response_instance.to_json())
+
+# convert the object into a dict
+update_source_connector_response_dict = update_source_connector_response_instance.to_dict()
+# create an instance of UpdateSourceConnectorResponse from a dict
+update_source_connector_response_from_dict = UpdateSourceConnectorResponse.from_dict(update_source_connector_response_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/UpdateSourceConnectorResponseData.md b/docs/UpdateSourceConnectorResponseData.md
new file mode 100644
index 0000000..3dfc9c4
--- /dev/null
+++ b/docs/UpdateSourceConnectorResponseData.md
@@ -0,0 +1,30 @@
+# UpdateSourceConnectorResponseData
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**updated_connector** | [**SourceConnector**](SourceConnector.md) | |
+**pipeline_ids** | **List[str]** | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.update_source_connector_response_data import UpdateSourceConnectorResponseData
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of UpdateSourceConnectorResponseData from a JSON string
+update_source_connector_response_data_instance = UpdateSourceConnectorResponseData.from_json(json)
+# print the JSON string representation of the object
+print(update_source_connector_response_data_instance.to_json())
+
+# convert the object into a dict
+update_source_connector_response_data_dict = update_source_connector_response_data_instance.to_dict()
+# create an instance of UpdateSourceConnectorResponseData from a dict
+update_source_connector_response_data_from_dict = UpdateSourceConnectorResponseData.from_dict(update_source_connector_response_data_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/UpdateUserInSourceConnectorRequest.md b/docs/UpdateUserInSourceConnectorRequest.md
new file mode 100644
index 0000000..6834124
--- /dev/null
+++ b/docs/UpdateUserInSourceConnectorRequest.md
@@ -0,0 +1,32 @@
+# UpdateUserInSourceConnectorRequest
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**user_id** | **str** | |
+**selected_files** | [**AddUserToSourceConnectorRequestSelectedFiles**](AddUserToSourceConnectorRequestSelectedFiles.md) | | [optional]
+**refresh_token** | **str** | | [optional]
+**access_token** | **str** | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.update_user_in_source_connector_request import UpdateUserInSourceConnectorRequest
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of UpdateUserInSourceConnectorRequest from a JSON string
+update_user_in_source_connector_request_instance = UpdateUserInSourceConnectorRequest.from_json(json)
+# print the JSON string representation of the object
+print(update_user_in_source_connector_request_instance.to_json())
+
+# convert the object into a dict
+update_user_in_source_connector_request_dict = update_user_in_source_connector_request_instance.to_dict()
+# create an instance of UpdateUserInSourceConnectorRequest from a dict
+update_user_in_source_connector_request_from_dict = UpdateUserInSourceConnectorRequest.from_dict(update_user_in_source_connector_request_dict)
+```
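+
+Per the table, only `user_id` is required; the token fields and `selected_files` are optional. A short sketch with invented identifiers:
+
+```python
+from vectorize_client.models.update_user_in_source_connector_request import UpdateUserInSourceConnectorRequest
+
+# Placeholder values; selected_files is omitted because it is optional.
+update_request = UpdateUserInSourceConnectorRequest(
+    user_id="user-123",
+    refresh_token="refresh-token-placeholder",
+)
+print(update_request.to_json())
+```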
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/UpdateUserInSourceConnectorResponse.md b/docs/UpdateUserInSourceConnectorResponse.md
new file mode 100644
index 0000000..642f1d9
--- /dev/null
+++ b/docs/UpdateUserInSourceConnectorResponse.md
@@ -0,0 +1,29 @@
+# UpdateUserInSourceConnectorResponse
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**message** | **str** | |
+
+## Example
+
+```python
+from vectorize_client.models.update_user_in_source_connector_response import UpdateUserInSourceConnectorResponse
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of UpdateUserInSourceConnectorResponse from a JSON string
+update_user_in_source_connector_response_instance = UpdateUserInSourceConnectorResponse.from_json(json)
+# print the JSON string representation of the object
+print(update_user_in_source_connector_response_instance.to_json())
+
+# convert the object into a dict
+update_user_in_source_connector_response_dict = update_user_in_source_connector_response_instance.to_dict()
+# create an instance of UpdateUserInSourceConnectorResponse from a dict
+update_user_in_source_connector_response_from_dict = UpdateUserInSourceConnectorResponse.from_dict(update_user_in_source_connector_response_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/UpdatedAIPlatformConnectorData.md b/docs/UpdatedAIPlatformConnectorData.md
new file mode 100644
index 0000000..ea27d35
--- /dev/null
+++ b/docs/UpdatedAIPlatformConnectorData.md
@@ -0,0 +1,30 @@
+# UpdatedAIPlatformConnectorData
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**updated_connector** | [**AIPlatform**](AIPlatform.md) | |
+**pipeline_ids** | **List[str]** | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.updated_ai_platform_connector_data import UpdatedAIPlatformConnectorData
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of UpdatedAIPlatformConnectorData from a JSON string
+updated_ai_platform_connector_data_instance = UpdatedAIPlatformConnectorData.from_json(json)
+# print the JSON string representation of the object
+print(updated_ai_platform_connector_data_instance.to_json())
+
+# convert the object into a dict
+updated_ai_platform_connector_data_dict = updated_ai_platform_connector_data_instance.to_dict()
+# create an instance of UpdatedAIPlatformConnectorData from a dict
+updated_ai_platform_connector_data_from_dict = UpdatedAIPlatformConnectorData.from_dict(updated_ai_platform_connector_data_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/UpdatedDestinationConnectorData.md b/docs/UpdatedDestinationConnectorData.md
new file mode 100644
index 0000000..ce8ccd8
--- /dev/null
+++ b/docs/UpdatedDestinationConnectorData.md
@@ -0,0 +1,30 @@
+# UpdatedDestinationConnectorData
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**updated_connector** | [**DestinationConnector**](DestinationConnector.md) | |
+**pipeline_ids** | **List[str]** | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.updated_destination_connector_data import UpdatedDestinationConnectorData
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of UpdatedDestinationConnectorData from a JSON string
+updated_destination_connector_data_instance = UpdatedDestinationConnectorData.from_json(json)
+# print the JSON string representation of the object
+print(updated_destination_connector_data_instance.to_json())
+
+# convert the object into a dict
+updated_destination_connector_data_dict = updated_destination_connector_data_instance.to_dict()
+# create an instance of UpdatedDestinationConnectorData from a dict
+updated_destination_connector_data_from_dict = UpdatedDestinationConnectorData.from_dict(updated_destination_connector_data_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/UploadFile.md b/docs/UploadFile.md
new file mode 100644
index 0000000..cbd0bc5
--- /dev/null
+++ b/docs/UploadFile.md
@@ -0,0 +1,34 @@
+# UploadFile
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**key** | **str** | |
+**name** | **str** | |
+**size** | **float** | |
+**extension** | **str** | | [optional]
+**last_modified** | **str** | |
+**metadata** | **Dict[str, str]** | |
+
+## Example
+
+```python
+from vectorize_client.models.upload_file import UploadFile
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of UploadFile from a JSON string
+upload_file_instance = UploadFile.from_json(json)
+# print the JSON string representation of the object
+print(upload_file_instance.to_json())
+
+# convert the object into a dict
+upload_file_dict = upload_file_instance.to_dict()
+# create an instance of UploadFile from a dict
+upload_file_from_dict = UploadFile.from_dict(upload_file_dict)
+```
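+
+The table shows `size` as a float and `metadata` as a plain string-to-string mapping, with only `extension` optional. A sketch with invented values (the timestamp format is an assumption; the field is typed as a plain string):
+
+```python
+from vectorize_client.models.upload_file import UploadFile
+
+# All values below are placeholders for illustration.
+upload_file = UploadFile(
+    key="uploads/report.pdf",
+    name="report.pdf",
+    size=10240.0,
+    extension="pdf",
+    last_modified="2024-01-01T00:00:00Z",
+    metadata={"source": "example"},
+)
+print(upload_file.to_json())
+```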
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/UploadsApi.md b/docs/UploadsApi.md
new file mode 100644
index 0000000..55ec4ad
--- /dev/null
+++ b/docs/UploadsApi.md
@@ -0,0 +1,263 @@
+# vectorize_client.UploadsApi
+
+All URIs are relative to *https://api.vectorize.io/v1*
+
+Method | HTTP request | Description
+------------- | ------------- | -------------
+[**delete_file_from_connector**](UploadsApi.md#delete_file_from_connector) | **DELETE** /org/{organizationId}/uploads/{connectorId}/files | Delete a file from a file upload connector
+[**get_upload_files_from_connector**](UploadsApi.md#get_upload_files_from_connector) | **GET** /org/{organizationId}/uploads/{connectorId}/files | Get uploaded files from a file upload connector
+[**start_file_upload_to_connector**](UploadsApi.md#start_file_upload_to_connector) | **PUT** /org/{organizationId}/uploads/{connectorId}/files | Upload a file to a file upload connector
+
+
+# **delete_file_from_connector**
+> DeleteFileResponse delete_file_from_connector(organization, connector_id)
+
+Delete a file from a file upload connector
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+
+import vectorize_client
+from vectorize_client.models.delete_file_response import DeleteFileResponse
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.UploadsApi(api_client)
+ organization = 'organization_example' # str |
+ connector_id = 'connector_id_example' # str |
+
+ try:
+ # Delete a file from a file upload connector
+ api_response = api_instance.delete_file_from_connector(organization, connector_id)
+ print("The response of UploadsApi->delete_file_from_connector:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling UploadsApi->delete_file_from_connector: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization** | **str**| |
+ **connector_id** | **str**| |
+
+### Return type
+
+[**DeleteFileResponse**](DeleteFileResponse.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: Not defined
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | File deleted successfully | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **get_upload_files_from_connector**
+> GetUploadFilesResponse get_upload_files_from_connector(organization, connector_id)
+
+Get uploaded files from a file upload connector
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+
+import vectorize_client
+from vectorize_client.models.get_upload_files_response import GetUploadFilesResponse
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.UploadsApi(api_client)
+ organization = 'organization_example' # str |
+ connector_id = 'connector_id_example' # str |
+
+ try:
+ # Get uploaded files from a file upload connector
+ api_response = api_instance.get_upload_files_from_connector(organization, connector_id)
+ print("The response of UploadsApi->get_upload_files_from_connector:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling UploadsApi->get_upload_files_from_connector: %s\n" % e)
+```
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization** | **str**| |
+ **connector_id** | **str**| |
+
+### Return type
+
+[**GetUploadFilesResponse**](GetUploadFilesResponse.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: Not defined
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | Files retrieved successfully | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
+# **start_file_upload_to_connector**
+> StartFileUploadToConnectorResponse start_file_upload_to_connector(organization, connector_id, start_file_upload_to_connector_request)
+
+Upload a file to a file upload connector
+
+### Example
+
+* Bearer (JWT) Authentication (bearerAuth):
+
+```python
+import os
+
+import vectorize_client
+from vectorize_client.models.start_file_upload_to_connector_request import StartFileUploadToConnectorRequest
+from vectorize_client.models.start_file_upload_to_connector_response import StartFileUploadToConnectorResponse
+from vectorize_client.rest import ApiException
+from pprint import pprint
+
+# Defining the host is optional and defaults to https://api.vectorize.io/v1
+# See configuration.py for a list of all supported configuration parameters.
+configuration = vectorize_client.Configuration(
+ host = "https://api.vectorize.io/v1"
+)
+
+# The client must configure the authentication and authorization parameters
+# in accordance with the API server security policy.
+# Examples for each auth method are provided below, use the example that
+# satisfies your auth use case.
+
+# Configure Bearer authorization (JWT): bearerAuth
+configuration = vectorize_client.Configuration(
+ access_token = os.environ["BEARER_TOKEN"]
+)
+
+# Enter a context with an instance of the API client
+with vectorize_client.ApiClient(configuration) as api_client:
+ # Create an instance of the API class
+ api_instance = vectorize_client.UploadsApi(api_client)
+ organization = 'organization_example' # str |
+ connector_id = 'connector_id_example' # str |
+ start_file_upload_to_connector_request = {"name":"My StartFileUploadToConnectorRequest","contentType":"document","metadata":"example-metadata"} # StartFileUploadToConnectorRequest |
+
+ try:
+ # Upload a file to a file upload connector
+ api_response = api_instance.start_file_upload_to_connector(organization, connector_id, start_file_upload_to_connector_request)
+ print("The response of UploadsApi->start_file_upload_to_connector:\n")
+ pprint(api_response)
+ except Exception as e:
+ print("Exception when calling UploadsApi->start_file_upload_to_connector: %s\n" % e)
+```
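+
+The raw dict above can also be expressed with the generated request model. The snake_case field names below are an assumption mirroring the JSON keys ("name", "contentType", "metadata"); check the generated StartFileUploadToConnectorRequest class if construction fails:
+
+```python
+from vectorize_client.models.start_file_upload_to_connector_request import StartFileUploadToConnectorRequest
+
+# Same values as the dict in the example above, expressed via the typed model.
+start_request = StartFileUploadToConnectorRequest(
+    name="My StartFileUploadToConnectorRequest",
+    content_type="document",
+    metadata="example-metadata",
+)
+print(start_request.to_json())
+```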
+
+
+
+### Parameters
+
+
+Name | Type | Description | Notes
+------------- | ------------- | ------------- | -------------
+ **organization** | **str**| |
+ **connector_id** | **str**| |
+ **start_file_upload_to_connector_request** | [**StartFileUploadToConnectorRequest**](StartFileUploadToConnectorRequest.md)| |
+
+### Return type
+
+[**StartFileUploadToConnectorResponse**](StartFileUploadToConnectorResponse.md)
+
+### Authorization
+
+[bearerAuth](../README.md#bearerAuth)
+
+### HTTP request headers
+
+ - **Content-Type**: application/json
+ - **Accept**: application/json
+
+### HTTP response details
+
+| Status code | Description | Response headers |
+|-------------|-------------|------------------|
+**200** | File ready to be uploaded | - |
+**400** | Invalid request | - |
+**401** | Unauthorized | - |
+**403** | Forbidden | - |
+**404** | Not found | - |
+**500** | Internal server error | - |
+
+[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
+
diff --git a/docs/VERTEXAuthConfig.md b/docs/VERTEXAuthConfig.md
new file mode 100644
index 0000000..afa1b6a
--- /dev/null
+++ b/docs/VERTEXAuthConfig.md
@@ -0,0 +1,32 @@
+# VERTEXAuthConfig
+
+Authentication configuration for Google Vertex AI
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name for your Google Vertex AI integration |
+**key** | **str** | Service Account Json. Example: Enter the contents of your Google Vertex AI Service Account JSON file |
+**region** | **str** | Region. Example: Region Name, e.g. us-central1 |
+
+## Example
+
+```python
+from vectorize_client.models.vertex_auth_config import VERTEXAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of VERTEXAuthConfig from a JSON string
+vertex_auth_config_instance = VERTEXAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(vertex_auth_config_instance.to_json())
+
+# convert the object into a dict
+vertex_auth_config_dict = vertex_auth_config_instance.to_dict()
+# create an instance of VERTEXAuthConfig from a dict
+vertex_auth_config_from_dict = VERTEXAuthConfig.from_dict(vertex_auth_config_dict)
+```
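+
+Since `key` takes the contents of the Service Account JSON file rather than its path, a common pattern is to read the file first. A sketch with a placeholder path and name:
+
+```python
+from pathlib import Path
+
+from vectorize_client.models.vertex_auth_config import VERTEXAuthConfig
+
+# Placeholder path; point this at your downloaded service account JSON file.
+service_account_json = Path("service-account.json").read_text()
+
+vertex_auth_config = VERTEXAuthConfig(
+    name="My Vertex AI integration",
+    key=service_account_json,
+    region="us-central1",
+)
+print(vertex_auth_config.to_json())
+```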
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/VOYAGEAuthConfig.md b/docs/VOYAGEAuthConfig.md
new file mode 100644
index 0000000..b2884c7
--- /dev/null
+++ b/docs/VOYAGEAuthConfig.md
@@ -0,0 +1,31 @@
+# VOYAGEAuthConfig
+
+Authentication configuration for Voyage AI
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name for your Voyage AI integration |
+**key** | **str** | API Key. Example: Enter your Voyage AI API Key |
+
+## Example
+
+```python
+from vectorize_client.models.voyage_auth_config import VOYAGEAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of VOYAGEAuthConfig from a JSON string
+voyage_auth_config_instance = VOYAGEAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(voyage_auth_config_instance.to_json())
+
+# convert the object into a dict
+voyage_auth_config_dict = voyage_auth_config_instance.to_dict()
+# create an instance of VOYAGEAuthConfig from a dict
+voyage_auth_config_from_dict = VOYAGEAuthConfig.from_dict(voyage_auth_config_dict)
+```
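+
+A sketch that reads the key from an environment variable instead of hard-coding it; the variable name `VOYAGE_API_KEY` is only a convention, not something the client requires:
+
+```python
+import os
+
+from vectorize_client.models.voyage_auth_config import VOYAGEAuthConfig
+
+# The environment variable name is arbitrary; any way of supplying the key works.
+voyage_auth_config = VOYAGEAuthConfig(
+    name="My Voyage AI integration",
+    key=os.environ["VOYAGE_API_KEY"],
+)
+print(voyage_auth_config.to_json())
+```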
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Vertex.md b/docs/Vertex.md
new file mode 100644
index 0000000..93bcd49
--- /dev/null
+++ b/docs/Vertex.md
@@ -0,0 +1,31 @@
+# Vertex
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"VERTEX\") |
+**config** | [**VERTEXAuthConfig**](VERTEXAuthConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.vertex import Vertex
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Vertex from a JSON string
+vertex_instance = Vertex.from_json(json)
+# print the JSON string representation of the object
+print(vertex_instance.to_json())
+
+# convert the object into a dict
+vertex_dict = vertex_instance.to_dict()
+# create an instance of Vertex from a dict
+vertex_from_dict = Vertex.from_dict(vertex_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Vertex1.md b/docs/Vertex1.md
new file mode 100644
index 0000000..75cb7c4
--- /dev/null
+++ b/docs/Vertex1.md
@@ -0,0 +1,29 @@
+# Vertex1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**VERTEXAuthConfig**](VERTEXAuthConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.vertex1 import Vertex1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Vertex1 from a JSON string
+vertex1_instance = Vertex1.from_json(json)
+# print the JSON string representation of the object
+print(vertex1_instance.to_json())
+
+# convert the object into a dict
+vertex1_dict = vertex1_instance.to_dict()
+# create an instance of Vertex1 from a dict
+vertex1_from_dict = Vertex1.from_dict(vertex1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Vertex2.md b/docs/Vertex2.md
new file mode 100644
index 0000000..08db100
--- /dev/null
+++ b/docs/Vertex2.md
@@ -0,0 +1,30 @@
+# Vertex2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"VERTEX\") |
+
+## Example
+
+```python
+from vectorize_client.models.vertex2 import Vertex2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Vertex2 from a JSON string
+vertex2_instance = Vertex2.from_json(json)
+# print the JSON string representation of the object
+print(vertex2_instance.to_json())
+
+# convert the object into a dict
+vertex2_dict = vertex2_instance.to_dict()
+# create an instance of Vertex2 from a dict
+vertex2_from_dict = Vertex2.from_dict(vertex2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Voyage.md b/docs/Voyage.md
new file mode 100644
index 0000000..9ff5813
--- /dev/null
+++ b/docs/Voyage.md
@@ -0,0 +1,31 @@
+# Voyage
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"VOYAGE\") |
+**config** | [**VOYAGEAuthConfig**](VOYAGEAuthConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.voyage import Voyage
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Voyage from a JSON string
+voyage_instance = Voyage.from_json(json)
+# print the JSON string representation of the object
+print(voyage_instance.to_json())
+
+# convert the object into a dict
+voyage_dict = voyage_instance.to_dict()
+# create an instance of Voyage from a dict
+voyage_from_dict = Voyage.from_dict(voyage_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Voyage1.md b/docs/Voyage1.md
new file mode 100644
index 0000000..86631df
--- /dev/null
+++ b/docs/Voyage1.md
@@ -0,0 +1,29 @@
+# Voyage1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**VOYAGEAuthConfig**](VOYAGEAuthConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.voyage1 import Voyage1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Voyage1 from a JSON string
+voyage1_instance = Voyage1.from_json(json)
+# print the JSON string representation of the object
+print(voyage1_instance.to_json())
+
+# convert the object into a dict
+voyage1_dict = voyage1_instance.to_dict()
+# create an instance of Voyage1 from a dict
+voyage1_from_dict = Voyage1.from_dict(voyage1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Voyage2.md b/docs/Voyage2.md
new file mode 100644
index 0000000..636a5ba
--- /dev/null
+++ b/docs/Voyage2.md
@@ -0,0 +1,30 @@
+# Voyage2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"VOYAGE\") |
+
+## Example
+
+```python
+from vectorize_client.models.voyage2 import Voyage2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Voyage2 from a JSON string
+voyage2_instance = Voyage2.from_json(json)
+# print the JSON string representation of the object
+print(voyage2_instance.to_json())
+
+# convert the object into a dict
+voyage2_dict = voyage2_instance.to_dict()
+# create an instance of Voyage2 from a dict
+voyage2_from_dict = Voyage2.from_dict(voyage2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/WEAVIATEAuthConfig.md b/docs/WEAVIATEAuthConfig.md
new file mode 100644
index 0000000..333a5a9
--- /dev/null
+++ b/docs/WEAVIATEAuthConfig.md
@@ -0,0 +1,32 @@
+# WEAVIATEAuthConfig
+
+Authentication configuration for Weaviate
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name for your Weaviate integration |
+**host** | **str** | Endpoint. Example: Enter your Weaviate Cluster REST Endpoint |
+**api_key** | **str** | API Key. Example: Enter your API key |
+
+## Example
+
+```python
+from vectorize_client.models.weaviate_auth_config import WEAVIATEAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of WEAVIATEAuthConfig from a JSON string
+weaviate_auth_config_instance = WEAVIATEAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(weaviate_auth_config_instance.to_json())
+
+# convert the object into a dict
+weaviate_auth_config_dict = weaviate_auth_config_instance.to_dict()
+# create an instance of WEAVIATEAuthConfig from a dict
+weaviate_auth_config_from_dict = WEAVIATEAuthConfig.from_dict(weaviate_auth_config_dict)
+```
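+
+The `host` field expects the cluster's REST endpoint. A sketch with made-up endpoint and key values:
+
+```python
+from vectorize_client.models.weaviate_auth_config import WEAVIATEAuthConfig
+
+# Placeholder endpoint and key; use the values from your Weaviate cluster.
+weaviate_auth_config = WEAVIATEAuthConfig(
+    name="My Weaviate integration",
+    host="https://my-cluster.weaviate.network",
+    api_key="weaviate-api-key-placeholder",
+)
+print(weaviate_auth_config.to_json())
+```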
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/WEAVIATEConfig.md b/docs/WEAVIATEConfig.md
new file mode 100644
index 0000000..62c6582
--- /dev/null
+++ b/docs/WEAVIATEConfig.md
@@ -0,0 +1,30 @@
+# WEAVIATEConfig
+
+Configuration for Weaviate connector
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**collection** | **str** | Collection Name. Example: Enter collection name |
+
+## Example
+
+```python
+from vectorize_client.models.weaviate_config import WEAVIATEConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of WEAVIATEConfig from a JSON string
+weaviate_config_instance = WEAVIATEConfig.from_json(json)
+# print the JSON string representation of the object
+print(weaviate_config_instance.to_json())
+
+# convert the object into a dict
+weaviate_config_dict = weaviate_config_instance.to_dict()
+# create an instance of WEAVIATEConfig from a dict
+weaviate_config_from_dict = WEAVIATEConfig.from_dict(weaviate_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/WEBCRAWLERAuthConfig.md b/docs/WEBCRAWLERAuthConfig.md
new file mode 100644
index 0000000..561ca7a
--- /dev/null
+++ b/docs/WEBCRAWLERAuthConfig.md
@@ -0,0 +1,31 @@
+# WEBCRAWLERAuthConfig
+
+Authentication configuration for Web Crawler
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name. Example: Enter a descriptive name |
+**seed_urls** | **str** | Seed URL(s). Add one or more seed URLs to crawl. The crawler will start from these URLs and follow links to other pages. Example: https://example.com |
+
+## Example
+
+```python
+from vectorize_client.models.webcrawler_auth_config import WEBCRAWLERAuthConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of WEBCRAWLERAuthConfig from a JSON string
+webcrawler_auth_config_instance = WEBCRAWLERAuthConfig.from_json(json)
+# print the JSON string representation of the object
+print(webcrawler_auth_config_instance.to_json())
+
+# convert the object into a dict
+webcrawler_auth_config_dict = webcrawler_auth_config_instance.to_dict()
+# create an instance of WEBCRAWLERAuthConfig from a dict
+webcrawler_auth_config_from_dict = WEBCRAWLERAuthConfig.from_dict(webcrawler_auth_config_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/WEBCRAWLERConfig.md b/docs/WEBCRAWLERConfig.md
new file mode 100644
index 0000000..e87a82a
--- /dev/null
+++ b/docs/WEBCRAWLERConfig.md
@@ -0,0 +1,36 @@
+# WEBCRAWLERConfig
+
+Configuration for Web Crawler connector
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**allowed_domains_opt** | **str** | Additional Allowed URLs or prefix(es). Add one or more allowed URLs or URL prefixes. The crawler will read URLs that match these patterns in addition to the seed URL(s). Example: https://docs.example.com | [optional]
+**forbidden_paths** | **str** | Forbidden Paths. Example: Enter forbidden paths (e.g. /admin) | [optional]
+**min_time_between_requests** | **float** | Throttle (ms). Example: Enter minimum time between requests in milliseconds | [optional] [default to 500]
+**max_error_count** | **float** | Max Error Count. Example: Enter maximum error count | [optional] [default to 5]
+**max_urls** | **float** | Max URLs. Example: Enter maximum number of URLs to crawl | [optional] [default to 1000]
+**max_depth** | **float** | Max Depth. Example: Enter maximum crawl depth | [optional] [default to 50]
+**reindex_interval_seconds** | **float** | Reindex Interval (seconds). Example: Enter reindex interval in seconds | [optional] [default to 3600]
+
+## Example
+
+```python
+from vectorize_client.models.webcrawler_config import WEBCRAWLERConfig
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of WEBCRAWLERConfig from a JSON string
+webcrawler_config_instance = WEBCRAWLERConfig.from_json(json)
+# print the JSON string representation of the object
+print(webcrawler_config_instance.to_json())
+
+# convert the object into a dict
+webcrawler_config_dict = webcrawler_config_instance.to_dict()
+# create an instance of WEBCRAWLERConfig from a dict
+webcrawler_config_from_dict = WEBCRAWLERConfig.from_dict(webcrawler_config_dict)
+```
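+
+Every field here is optional and falls back to the default shown in the table, so a config only needs to override what differs. A sketch that tightens the crawl limits with illustrative values:
+
+```python
+from vectorize_client.models.webcrawler_config import WEBCRAWLERConfig
+
+# Only overridden values are passed; the rest keep their defaults
+# (500 ms throttle, 5 max errors, 1000 max URLs, depth 50, 3600 s reindex interval).
+webcrawler_config = WEBCRAWLERConfig(
+    allowed_domains_opt="https://docs.example.com",
+    forbidden_paths="/admin",
+    max_urls=200,
+    max_depth=10,
+)
+print(webcrawler_config.to_json())
+```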
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Weaviate.md b/docs/Weaviate.md
new file mode 100644
index 0000000..f546d25
--- /dev/null
+++ b/docs/Weaviate.md
@@ -0,0 +1,31 @@
+# Weaviate
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"WEAVIATE\") |
+**config** | [**WEAVIATEConfig**](WEAVIATEConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.weaviate import Weaviate
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Weaviate from a JSON string
+weaviate_instance = Weaviate.from_json(json)
+# print the JSON string representation of the object
+print(weaviate_instance.to_json())
+
+# convert the object into a dict
+weaviate_dict = weaviate_instance.to_dict()
+# create an instance of Weaviate from a dict
+weaviate_from_dict = Weaviate.from_dict(weaviate_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Weaviate1.md b/docs/Weaviate1.md
new file mode 100644
index 0000000..2c6c0f5
--- /dev/null
+++ b/docs/Weaviate1.md
@@ -0,0 +1,29 @@
+# Weaviate1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**WEAVIATEConfig**](WEAVIATEConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.weaviate1 import Weaviate1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Weaviate1 from a JSON string
+weaviate1_instance = Weaviate1.from_json(json)
+# print the JSON string representation of the object
+print(weaviate1_instance.to_json())
+
+# convert the object into a dict
+weaviate1_dict = weaviate1_instance.to_dict()
+# create an instance of Weaviate1 from a dict
+weaviate1_from_dict = Weaviate1.from_dict(weaviate1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/Weaviate2.md b/docs/Weaviate2.md
new file mode 100644
index 0000000..7a92014
--- /dev/null
+++ b/docs/Weaviate2.md
@@ -0,0 +1,30 @@
+# Weaviate2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"WEAVIATE\") |
+
+## Example
+
+```python
+from vectorize_client.models.weaviate2 import Weaviate2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of Weaviate2 from a JSON string
+weaviate2_instance = Weaviate2.from_json(json)
+# print the JSON string representation of the object
+print(weaviate2_instance.to_json())
+
+# convert the object into a dict
+weaviate2_dict = weaviate2_instance.to_dict()
+# create an instance of Weaviate2 from a dict
+weaviate2_from_dict = Weaviate2.from_dict(weaviate2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/WebCrawler.md b/docs/WebCrawler.md
new file mode 100644
index 0000000..a928e44
--- /dev/null
+++ b/docs/WebCrawler.md
@@ -0,0 +1,31 @@
+# WebCrawler
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**name** | **str** | Name of the connector |
+**type** | **str** | Connector type (must be \"WEB_CRAWLER\") |
+**config** | [**WEBCRAWLERConfig**](WEBCRAWLERConfig.md) | |
+
+## Example
+
+```python
+from vectorize_client.models.web_crawler import WebCrawler
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of WebCrawler from a JSON string
+web_crawler_instance = WebCrawler.from_json(json)
+# print the JSON string representation of the object
+print(web_crawler_instance.to_json())
+
+# convert the object into a dict
+web_crawler_dict = web_crawler_instance.to_dict()
+# create an instance of WebCrawler from a dict
+web_crawler_from_dict = WebCrawler.from_dict(web_crawler_dict)
+```
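+
+Putting the pieces together, the crawler connector combines a name, the fixed "WEB_CRAWLER" type, and a [WEBCRAWLERConfig](WEBCRAWLERConfig.md). A sketch with placeholder values:
+
+```python
+from vectorize_client.models.web_crawler import WebCrawler
+from vectorize_client.models.webcrawler_config import WEBCRAWLERConfig
+
+# Placeholder values; the type must be "WEB_CRAWLER" for this connector.
+web_crawler = WebCrawler(
+    name="My docs crawler",
+    type="WEB_CRAWLER",
+    config=WEBCRAWLERConfig(max_urls=500),
+)
+print(web_crawler.to_json())
+```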
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/WebCrawler1.md b/docs/WebCrawler1.md
new file mode 100644
index 0000000..44dac8e
--- /dev/null
+++ b/docs/WebCrawler1.md
@@ -0,0 +1,29 @@
+# WebCrawler1
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**config** | [**WEBCRAWLERConfig**](WEBCRAWLERConfig.md) | | [optional]
+
+## Example
+
+```python
+from vectorize_client.models.web_crawler1 import WebCrawler1
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of WebCrawler1 from a JSON string
+web_crawler1_instance = WebCrawler1.from_json(json)
+# print the JSON string representation of the object
+print(web_crawler1_instance.to_json())
+
+# convert the object into a dict
+web_crawler1_dict = web_crawler1_instance.to_dict()
+# create an instance of WebCrawler1 from a dict
+web_crawler1_from_dict = WebCrawler1.from_dict(web_crawler1_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/docs/WebCrawler2.md b/docs/WebCrawler2.md
new file mode 100644
index 0000000..ba9988a
--- /dev/null
+++ b/docs/WebCrawler2.md
@@ -0,0 +1,30 @@
+# WebCrawler2
+
+
+## Properties
+
+Name | Type | Description | Notes
+------------ | ------------- | ------------- | -------------
+**id** | **str** | Unique identifier for the connector |
+**type** | **str** | Connector type (must be \"WEB_CRAWLER\") |
+
+## Example
+
+```python
+from vectorize_client.models.web_crawler2 import WebCrawler2
+
+# TODO update the JSON string below
+json = "{}"
+# create an instance of WebCrawler2 from a JSON string
+web_crawler2_instance = WebCrawler2.from_json(json)
+# print the JSON string representation of the object
+print(web_crawler2_instance.to_json())
+
+# convert the object into a dict
+web_crawler2_dict = web_crawler2_instance.to_dict()
+# create an instance of WebCrawler2 from a dict
+web_crawler2_from_dict = WebCrawler2.from_dict(web_crawler2_dict)
+```
+[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
+
+
diff --git a/git_push.sh b/git_push.sh
new file mode 100644
index 0000000..f53a75d
--- /dev/null
+++ b/git_push.sh
@@ -0,0 +1,57 @@
+#!/bin/sh
+# ref: https://help.github.com/articles/adding-an-existing-project-to-github-using-the-command-line/
+#
+# Usage example: /bin/sh ./git_push.sh wing328 openapi-petstore-perl "minor update" "gitlab.com"
+
+git_user_id=$1
+git_repo_id=$2
+release_note=$3
+git_host=$4
+
+if [ "$git_host" = "" ]; then
+ git_host="github.com"
+ echo "[INFO] No command line input provided. Set \$git_host to $git_host"
+fi
+
+if [ "$git_user_id" = "" ]; then
+ git_user_id="GIT_USER_ID"
+ echo "[INFO] No command line input provided. Set \$git_user_id to $git_user_id"
+fi
+
+if [ "$git_repo_id" = "" ]; then
+ git_repo_id="GIT_REPO_ID"
+ echo "[INFO] No command line input provided. Set \$git_repo_id to $git_repo_id"
+fi
+
+if [ "$release_note" = "" ]; then
+ release_note="Minor update"
+ echo "[INFO] No command line input provided. Set \$release_note to $release_note"
+fi
+
+# Initialize the local directory as a Git repository
+git init
+
+# Adds the files in the local repository and stages them for commit.
+git add .
+
+# Commits the tracked changes and prepares them to be pushed to a remote repository.
+git commit -m "$release_note"
+
+# Sets the new remote
+git_remote=$(git remote)
+if [ "$git_remote" = "" ]; then # git remote not defined
+
+ if [ "$GIT_TOKEN" = "" ]; then
+ echo "[INFO] \$GIT_TOKEN (environment variable) is not set. Using the git credential in your environment."
+ git remote add origin https://${git_host}/${git_user_id}/${git_repo_id}.git
+ else
+ git remote add origin https://${git_user_id}:"${GIT_TOKEN}"@${git_host}/${git_user_id}/${git_repo_id}.git
+ fi
+
+fi
+
+git pull origin master
+
+# Pushes the changes in the local repository up to the remote repository
+echo "Git pushing to https://${git_host}/${git_user_id}/${git_repo_id}.git"
+git push origin master 2>&1 | grep -v 'To https'
diff --git a/node_modules/.bin/markdown-it b/node_modules/.bin/markdown-it
new file mode 120000
index 0000000..8a64108
--- /dev/null
+++ b/node_modules/.bin/markdown-it
@@ -0,0 +1 @@
+../markdown-it/bin/markdown-it.mjs
\ No newline at end of file
diff --git a/node_modules/.bin/tsc b/node_modules/.bin/tsc
new file mode 120000
index 0000000..0863208
--- /dev/null
+++ b/node_modules/.bin/tsc
@@ -0,0 +1 @@
+../typescript/bin/tsc
\ No newline at end of file
diff --git a/node_modules/.bin/tsserver b/node_modules/.bin/tsserver
new file mode 120000
index 0000000..f8f8f1a
--- /dev/null
+++ b/node_modules/.bin/tsserver
@@ -0,0 +1 @@
+../typescript/bin/tsserver
\ No newline at end of file
diff --git a/node_modules/.bin/typedoc b/node_modules/.bin/typedoc
new file mode 120000
index 0000000..8303b02
--- /dev/null
+++ b/node_modules/.bin/typedoc
@@ -0,0 +1 @@
+../typedoc/bin/typedoc
\ No newline at end of file
diff --git a/node_modules/.bin/yaml b/node_modules/.bin/yaml
new file mode 120000
index 0000000..0368324
--- /dev/null
+++ b/node_modules/.bin/yaml
@@ -0,0 +1 @@
+../yaml/bin.mjs
\ No newline at end of file
diff --git a/node_modules/.package-lock.json b/node_modules/.package-lock.json
new file mode 100644
index 0000000..abc510a
--- /dev/null
+++ b/node_modules/.package-lock.json
@@ -0,0 +1,241 @@
+{
+ "name": "vectorize-clients",
+ "version": "0.1.0",
+ "lockfileVersion": 3,
+ "requires": true,
+ "packages": {
+ "node_modules/@gerrit0/mini-shiki": {
+ "version": "1.27.2",
+ "resolved": "https://registry.npmjs.org/@gerrit0/mini-shiki/-/mini-shiki-1.27.2.tgz",
+ "integrity": "sha512-GeWyHz8ao2gBiUW4OJnQDxXQnFgZQwwQk05t/CVVgNBN7/rK8XZ7xY6YhLVv9tH3VppWWmr9DCl3MwemB/i+Og==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@shikijs/engine-oniguruma": "^1.27.2",
+ "@shikijs/types": "^1.27.2",
+ "@shikijs/vscode-textmate": "^10.0.1"
+ }
+ },
+ "node_modules/@iarna/toml": {
+ "version": "2.2.5",
+ "resolved": "https://registry.npmjs.org/@iarna/toml/-/toml-2.2.5.tgz",
+ "integrity": "sha512-trnsAYxU3xnS1gPHPyU961coFyLkh4gAD/0zQ5mymY4yOZ+CYvsPqUbOFSw0aDM4y0tV7tiFxL/1XfXPNC6IPg==",
+ "license": "ISC"
+ },
+ "node_modules/@shikijs/engine-oniguruma": {
+ "version": "1.29.2",
+ "resolved": "https://registry.npmjs.org/@shikijs/engine-oniguruma/-/engine-oniguruma-1.29.2.tgz",
+ "integrity": "sha512-7iiOx3SG8+g1MnlzZVDYiaeHe7Ez2Kf2HrJzdmGwkRisT7r4rak0e655AcM/tF9JG/kg5fMNYlLLKglbN7gBqA==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@shikijs/types": "1.29.2",
+ "@shikijs/vscode-textmate": "^10.0.1"
+ }
+ },
+ "node_modules/@shikijs/types": {
+ "version": "1.29.2",
+ "resolved": "https://registry.npmjs.org/@shikijs/types/-/types-1.29.2.tgz",
+ "integrity": "sha512-VJjK0eIijTZf0QSTODEXCqinjBn0joAHQ+aPSBzrv4O2d/QSbsMw+ZeSRx03kV34Hy7NzUvV/7NqfYGRLrASmw==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@shikijs/vscode-textmate": "^10.0.1",
+ "@types/hast": "^3.0.4"
+ }
+ },
+ "node_modules/@shikijs/vscode-textmate": {
+ "version": "10.0.2",
+ "resolved": "https://registry.npmjs.org/@shikijs/vscode-textmate/-/vscode-textmate-10.0.2.tgz",
+ "integrity": "sha512-83yeghZ2xxin3Nj8z1NMd/NCuca+gsYXswywDy5bHvwlWL8tpTQmzGeUuHd9FC3E/SBEMvzJRwWEOz5gGes9Qg==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/@types/hast": {
+ "version": "3.0.4",
+ "resolved": "https://registry.npmjs.org/@types/hast/-/hast-3.0.4.tgz",
+ "integrity": "sha512-WPs+bbQw5aCj+x6laNGWLH3wviHtoCv/P3+otBhbOhJgG8qtpdAMlTCxLtsTWA7LH1Oh/bFCHsBn0TPS5m30EQ==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "@types/unist": "*"
+ }
+ },
+ "node_modules/@types/unist": {
+ "version": "3.0.3",
+ "resolved": "https://registry.npmjs.org/@types/unist/-/unist-3.0.3.tgz",
+ "integrity": "sha512-ko/gIFJRv177XgZsZcBwnqJN5x/Gien8qNOn0D5bQU/zAzVf9Zt3BlcUiLqhV9y4ARk0GbT3tnUiPNgnTXzc/Q==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/argparse": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/argparse/-/argparse-2.0.1.tgz",
+ "integrity": "sha512-8+9WqebbFzpX9OR+Wa6O29asIogeRMzcGtAINdpMHHyAg10f05aSFVBbcEqGf/PXw1EjAZ+q2/bEBg3DvurK3Q==",
+ "dev": true,
+ "license": "Python-2.0"
+ },
+ "node_modules/balanced-match": {
+ "version": "1.0.2",
+ "resolved": "https://registry.npmjs.org/balanced-match/-/balanced-match-1.0.2.tgz",
+ "integrity": "sha512-3oSeUO0TMV67hN1AmbXsK4yaqU7tjiHlbxRDZOpH0KW9+CeX4bRAaX0Anxt0tx2MrpRpWwQaPwIlISEJhYU5Pw==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/brace-expansion": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-2.0.1.tgz",
+ "integrity": "sha512-XnAIvQ8eM+kC6aULx6wuQiwVsnzsi9d3WxzV3FpWTGA19F621kwdbsAcFKXgKUHZWsy+mY6iL1sHTxWEFCytDA==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "balanced-match": "^1.0.0"
+ }
+ },
+ "node_modules/entities": {
+ "version": "4.5.0",
+ "resolved": "https://registry.npmjs.org/entities/-/entities-4.5.0.tgz",
+ "integrity": "sha512-V0hjH4dGPh9Ao5p0MoRY6BVqtwCjhz6vI5LT8AJ55H+4g9/4vbHx1I54fS0XuclLhDHArPQCiMjDxjaL8fPxhw==",
+ "dev": true,
+ "license": "BSD-2-Clause",
+ "engines": {
+ "node": ">=0.12"
+ },
+ "funding": {
+ "url": "https://github.com/fb55/entities?sponsor=1"
+ }
+ },
+ "node_modules/linkify-it": {
+ "version": "5.0.0",
+ "resolved": "https://registry.npmjs.org/linkify-it/-/linkify-it-5.0.0.tgz",
+ "integrity": "sha512-5aHCbzQRADcdP+ATqnDuhhJ/MRIqDkZX5pyjFHRRysS8vZ5AbqGEoFIb6pYHPZ+L/OC2Lc+xT8uHVVR5CAK/wQ==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "uc.micro": "^2.0.0"
+ }
+ },
+ "node_modules/lunr": {
+ "version": "2.3.9",
+ "resolved": "https://registry.npmjs.org/lunr/-/lunr-2.3.9.tgz",
+ "integrity": "sha512-zTU3DaZaF3Rt9rhN3uBMGQD3dD2/vFQqnvZCDv4dl5iOzq2IZQqTxu90r4E5J+nP70J3ilqVCrbho2eWaeW8Ow==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/markdown-it": {
+ "version": "14.1.0",
+ "resolved": "https://registry.npmjs.org/markdown-it/-/markdown-it-14.1.0.tgz",
+ "integrity": "sha512-a54IwgWPaeBCAAsv13YgmALOF1elABB08FxO9i+r4VFk5Vl4pKokRPeX8u5TCgSsPi6ec1otfLjdOpVcgbpshg==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "argparse": "^2.0.1",
+ "entities": "^4.4.0",
+ "linkify-it": "^5.0.0",
+ "mdurl": "^2.0.0",
+ "punycode.js": "^2.3.1",
+ "uc.micro": "^2.1.0"
+ },
+ "bin": {
+ "markdown-it": "bin/markdown-it.mjs"
+ }
+ },
+ "node_modules/mdurl": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/mdurl/-/mdurl-2.0.0.tgz",
+ "integrity": "sha512-Lf+9+2r+Tdp5wXDXC4PcIBjTDtq4UKjCPMQhKIuzpJNW0b96kVqSwW0bT7FhRSfmAiFYgP+SCRvdrDozfh0U5w==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/minimatch": {
+ "version": "9.0.5",
+ "resolved": "https://registry.npmjs.org/minimatch/-/minimatch-9.0.5.tgz",
+ "integrity": "sha512-G6T0ZX48xgozx7587koeX9Ys2NYy6Gmv//P89sEte9V9whIapMNF4idKxnW2QtCcLiTWlb/wfCabAtAFWhhBow==",
+ "dev": true,
+ "license": "ISC",
+ "dependencies": {
+ "brace-expansion": "^2.0.1"
+ },
+ "engines": {
+ "node": ">=16 || 14 >=14.17"
+ },
+ "funding": {
+ "url": "https://github.com/sponsors/isaacs"
+ }
+ },
+ "node_modules/punycode.js": {
+ "version": "2.3.1",
+ "resolved": "https://registry.npmjs.org/punycode.js/-/punycode.js-2.3.1.tgz",
+ "integrity": "sha512-uxFIHU0YlHYhDQtV4R9J6a52SLx28BCjT+4ieh7IGbgwVJWO+km431c4yRlREUAsAmt/uMjQUyQHNEPf0M39CA==",
+ "dev": true,
+ "license": "MIT",
+ "engines": {
+ "node": ">=6"
+ }
+ },
+ "node_modules/toml": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/toml/-/toml-3.0.0.tgz",
+ "integrity": "sha512-y/mWCZinnvxjTKYhJ+pYxwD0mRLVvOtdS2Awbgxln6iEnt4rk0yBxeSBHkGJcPucRiG0e55mwWp+g/05rsrd6w==",
+ "license": "MIT"
+ },
+ "node_modules/typedoc": {
+ "version": "0.27.8",
+ "resolved": "https://registry.npmjs.org/typedoc/-/typedoc-0.27.8.tgz",
+ "integrity": "sha512-q0/2TUunNEDmWkn23ULKGXieK8cgGuAmBUXC/HcZ/rgzMI9Yr4Nq3in1K1vT1NZ9zx6M78yTk3kmIPbwJgK5KA==",
+ "dev": true,
+ "license": "Apache-2.0",
+ "dependencies": {
+ "@gerrit0/mini-shiki": "^1.24.0",
+ "lunr": "^2.3.9",
+ "markdown-it": "^14.1.0",
+ "minimatch": "^9.0.5",
+ "yaml": "^2.6.1"
+ },
+ "bin": {
+ "typedoc": "bin/typedoc"
+ },
+ "engines": {
+ "node": ">= 18"
+ },
+ "peerDependencies": {
+ "typescript": "5.0.x || 5.1.x || 5.2.x || 5.3.x || 5.4.x || 5.5.x || 5.6.x || 5.7.x"
+ }
+ },
+ "node_modules/typescript": {
+ "version": "5.7.3",
+ "resolved": "https://registry.npmjs.org/typescript/-/typescript-5.7.3.tgz",
+ "integrity": "sha512-84MVSjMEHP+FQRPy3pX9sTVV/INIex71s9TL2Gm5FG/WG1SqXeKyZ0k7/blY/4FdOzI12CBy1vGc4og/eus0fw==",
+ "dev": true,
+ "license": "Apache-2.0",
+ "peer": true,
+ "bin": {
+ "tsc": "bin/tsc",
+ "tsserver": "bin/tsserver"
+ },
+ "engines": {
+ "node": ">=14.17"
+ }
+ },
+ "node_modules/uc.micro": {
+ "version": "2.1.0",
+ "resolved": "https://registry.npmjs.org/uc.micro/-/uc.micro-2.1.0.tgz",
+ "integrity": "sha512-ARDJmphmdvUk6Glw7y9DQ2bFkKBHwQHLi2lsaH6PPmz/Ka9sFOBsBluozhDltWmnv9u/cF6Rt87znRTPV+yp/A==",
+ "dev": true,
+ "license": "MIT"
+ },
+ "node_modules/yaml": {
+ "version": "2.7.0",
+ "resolved": "https://registry.npmjs.org/yaml/-/yaml-2.7.0.tgz",
+ "integrity": "sha512-+hSoy/QHluxmC9kCIJyL/uyFmLmc+e5CFR5Wa+bpIhIj85LVb9ZH2nVnqrHoSvKogwODv0ClqZkmiSSaIH5LTA==",
+ "dev": true,
+ "license": "ISC",
+ "bin": {
+ "yaml": "bin.mjs"
+ },
+ "engines": {
+ "node": ">= 14"
+ }
+ }
+ }
+}
diff --git a/node_modules/@gerrit0/mini-shiki/CHANGELOG.md b/node_modules/@gerrit0/mini-shiki/CHANGELOG.md
new file mode 100644
index 0000000..9c8a67c
--- /dev/null
+++ b/node_modules/@gerrit0/mini-shiki/CHANGELOG.md
@@ -0,0 +1,53 @@
+# Changelog
+
+## v1.27.2 (2025-01-16)
+
+- Update to Shiki v1.27.2
+
+## v1.27.0 (2025-01-15)
+
+- Update to Shiki v1.27.0
+
+## v1.26.1 (2025-01-04)
+
+- Update to Shiki v1.26.1
+
+## v1.25.1 (2025-01-03)
+
+- Update to Shiki v1.25.1
+
+## v1.24.4 (2024-12-22)
+
+- Update to Shiki v1.24.4
+
+## v1.24.3 (2024-12-20)
+
+- Update to Shiki v1.24.3
+
+## v1.24.4 (2024-12-13)
+
+- Update to Shiki v1.24.2
+
+## v1.24.3 (2024-12-11)
+
+- Update to Shiki v1.24.2
+
+## v1.24.2 (2024-12-10)
+
+- Update to Shiki v1.24.1
+
+## v1.24.1 (2024-11-29)
+
+- Support `require` with Node's `--experimental-require-module` flag
+
+## v1.24.0 (2024-11-28)
+
+- Update to Shiki v1.24.0
+
+## v1.23.2 (2024-11-24)
+
+- Fix publish, include built source
+
+## v1.23.1 (2024-11-24)
+
+- Initial release, Shiki v1.23.1
diff --git a/node_modules/@gerrit0/mini-shiki/LICENSE b/node_modules/@gerrit0/mini-shiki/LICENSE
new file mode 100644
index 0000000..008c15d
--- /dev/null
+++ b/node_modules/@gerrit0/mini-shiki/LICENSE
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2024 Gerrit Birkeland
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/node_modules/@gerrit0/mini-shiki/README.md b/node_modules/@gerrit0/mini-shiki/README.md
new file mode 100644
index 0000000..3364665
--- /dev/null
+++ b/node_modules/@gerrit0/mini-shiki/README.md
@@ -0,0 +1,54 @@
+# @gerrit0/mini-shiki
+
+This is a re-bundled version of [Shiki](https://shiki.style/) which strips out
+the dependencies which aren't necessary for [TypeDoc](https://typedoc.org/)'s usage.
+
+## Why?
+
+Compare Shiki's dependency tree:
+
+
+
+To this package's dependency tree:
+
+
+
+The Shiki maintainers [have declined](https://github.com/shikijs/shiki/issues/844) to split
+up the package in a way which makes it possible to avoid these dependencies when just relying
+on shikijs published packages.
+
+## Releases
+
+This package will be released and keep the same major/minor version numbers as Shiki.
+Patch versions will generally be the same as Shiki, but may differ if adjustments are
+necessary to fix compatibility issues.
+
+## ESM / CommonJS
+
+This package is ESM, but does not use top level await, so may be `require`d in
+Node 23, or Node 20.17+ with the `--experimental-require-module` flag.
+
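+As a small sketch, a CommonJS consumer could load it like this (only on a Node
+version that supports requiring ESM, as described above):
+
+```js
+// Run with `node --experimental-require-module example.cjs` on Node 20.17+,
+// or without the flag on Node 23+.
+const miniShiki = require("@gerrit0/mini-shiki");
+
+console.log(typeof miniShiki.createShikiInternal); // "function"
+```
+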
+## Usage
+
+```js
+import {
+ codeToTokensWithThemes,
+ createShikiInternal,
+ createOnigurumaEngine,
+ bundledLanguages,
+ bundledThemes,
+ loadBuiltinWasm,
+} from "@gerrit0/mini-shiki";
+
+await loadBuiltinWasm();
+const shiki = await createShikiInternal({
+ engine: createOnigurumaEngine(),
+ langs: [bundledLanguages.typescript],
+ themes: [bundledThemes["light-plus"]],
+});
+
+const lines = codeToTokensWithThemes(shiki, "console.log('Hello world!')", {
+ themes: { light: "light-plus" },
+ lang: "typescript",
+});
+```
diff --git a/node_modules/@gerrit0/mini-shiki/package.json b/node_modules/@gerrit0/mini-shiki/package.json
new file mode 100644
index 0000000..53ce215
--- /dev/null
+++ b/node_modules/@gerrit0/mini-shiki/package.json
@@ -0,0 +1,45 @@
+{
+ "type": "module",
+ "license": "MIT",
+ "name": "@gerrit0/mini-shiki",
+ "version": "1.27.2",
+ "exports": {
+ ".": {
+ "types": "./dist/shiki.d.ts",
+ "default": "./dist/shiki.js"
+ },
+ "./onig.wasm": {
+ "import": "./dist/onig.wasm"
+ }
+ },
+ "repository": {
+ "type": "git",
+ "url": "git+https://github.com/Gerrit0/mini-shiki.git"
+ },
+ "scripts": {
+ "build": "./scripts/build.sh",
+ "test": "node --experimental-require-module --test"
+ },
+ "devDependencies": {
+ "@rollup/plugin-node-resolve": "15.3.0",
+ "@rollup/plugin-typescript": "12.1.1",
+ "@types/node": "^22.9.3",
+ "dts-bundle-generator": "^9.5.1",
+ "rollup": "4.27.4",
+ "rollup-plugin-dts": "^6.1.1",
+ "semver": "7.6.3",
+ "shiki": "^1.27.2"
+ },
+ "files": [
+ "static",
+ "dist",
+ "README.md",
+ "CHANGELOG.md",
+ "LICENSE"
+ ],
+ "dependencies": {
+ "@shikijs/engine-oniguruma": "^1.27.2",
+ "@shikijs/types": "^1.27.2",
+ "@shikijs/vscode-textmate": "^10.0.1"
+ }
+}
diff --git a/node_modules/@gerrit0/mini-shiki/static/mini-shiki-dependency-tree.svg b/node_modules/@gerrit0/mini-shiki/static/mini-shiki-dependency-tree.svg
new file mode 100644
index 0000000..71a902f
--- /dev/null
+++ b/node_modules/@gerrit0/mini-shiki/static/mini-shiki-dependency-tree.svg
@@ -0,0 +1,108 @@
+
\ No newline at end of file
diff --git a/node_modules/@gerrit0/mini-shiki/static/shiki-dependency-tree.svg b/node_modules/@gerrit0/mini-shiki/static/shiki-dependency-tree.svg
new file mode 100644
index 0000000..aec5e0f
--- /dev/null
+++ b/node_modules/@gerrit0/mini-shiki/static/shiki-dependency-tree.svg
@@ -0,0 +1,786 @@
+
diff --git a/node_modules/@iarna/toml/CHANGELOG.md b/node_modules/@iarna/toml/CHANGELOG.md
new file mode 100755
index 0000000..21964f9
--- /dev/null
+++ b/node_modules/@iarna/toml/CHANGELOG.md
@@ -0,0 +1,278 @@
+# 2.2.5
+
+* Docs: Updated benchmark results. Add fast-toml to result list. Improved benchmark layout.
+* Update @sgarciac/bombadil and @ltd/j-toml in benchmarks and compliance tests.
+* Dev: Some dev dep updates that shouldn't have any impact.
+
+# 2.2.4
+
+* Bug fix: Plain date literals (not datetime) immediately followed by another statement (no whitespace or blank line) would crash. Fixes [#19](https://github.com/iarna/iarna-toml/issues/19) and [#23](https://github.com/iarna/iarna-toml/issues/23), thank you [@arnau](https://github.com/arnau) and [@jschaf](https://github.com/jschaf) for reporting this!
+* Bug fix: Hex literals with lowercase Es would throw errors. (Thank you [@DaeCatt](https://github.com/DaeCatt) for this fix!) Fixed [#20](https://github.com/iarna/iarna-toml/issues/20)
+* Some minor doc tweaks
+* Added Node 12 and 13 to Travis. (Node 6 is failing there now, mysteriously. It works on my machine™, shipping anyway. 🙃)
+
+# 2.2.3
+
+This release just updates the spec compliance tests and benchmark data to
+better represent @ltd/j-toml.
+
+# 2.2.2
+
+## Fixes
+
+* Support parsing and stringifying objects with `__proto__` properties. ([@LongTengDao](https://github.com/LongTengDao))
+
+## Misc
+
+* Updates for spec compliance and benchmarking:
+ * @sgarciac/bombadil -> 2.1.0
+ * toml -> 3.0.0
+* Added spec compliance and benchmarking for:
+ * @ltd/j-toml
+
+# 2.2.1
+
+## Fixes
+
+* Fix bug where keys with names matching javascript Object methods would
+ error. Thanks [@LongTengDao](https://github.com/LongTengDao) for finding this!
+* Fix bug where a bundled version would fail if `util.inspect` wasn't
+ provided. This was supposed to be guarded against, but there was a bug in
+ the guard. Thanks [@agriffis](https://github.com/agriffis) for finding and fixing this!
+
+## Misc
+
+* Update the version of bombadil for spec compliance and benchmarking purposes to 2.0.0
+
+## Did you know?
+
+Node 6 and 8 are measurably slower than Node 9, 10 and 11, at least when it comes to parsing TOML!
+
+
+
+# 2.2.0
+
+## Features
+
+* Typescript: Lots of improvements to our type definitions, many many to
+ [@jorgegonzalez](https://github.com/jorgegonzalez) and [@momocow](https://github.com/momocow) for working through these.
+
+## Fixes
+
+* Very large integers (>52bit) are stored as BigInts on runtimes that
+  support them. BigInts are arbitrary precision, but the TOML spec limits its
+  integers to 64 bits, so we now limit our integers to 64 bits as well.
+* Fix a bug in stringify where control characters were being emitted as unicode chars and not escape sequences.
+
+## Misc
+
+* Moved our spec tests out to an external repo
+* Improved the styling of the spec compliance comparison
+
+# 2.1.1
+
+## Fixes
+
+* Oops, type defs didn't end up in the tarball, ty [@jorgegonzalez](https://github.com/jorgegonzalez)‼
+
+# 2.1.0
+
+## Features
+
+* Types for typescript support, thank you [@momocow](https://github.com/momocow)!
+
+## Fixes
+
+* stringify: always strip invalid dates. This fixes a bug where an
+ invalid date in an inline array would not be removed and would instead
+ result in an error.
+* stringify: if an invalid type is found make sure it's thrown as an
+ error object. Previously the type name was, unhelpfully, being thrown.
+* stringify: Multiline strings ending in a quote would generate invalid TOML.
+* parse: Error if a signed integer has a leading zero, eg, `-01` or `+01`.
+* parse: Error if \_ appears at the end of the integer part of a float, eg `1_.0`. \_ is only valid between _digits_.
+
+## Fun
+
+* BurntSushi's comprehensive TOML 0.4.0 test suite is now used in addition to our existing test suite.
+* You can see exactly how the other JS TOML libraries stack up in testing
+ against both BurntSushi's tests and my own in the new
+ [TOML-SPEC-SUPPORT](TOML-SPEC-SUPPORT.md) doc.
+
+# 2.0.0
+
+With 2.0.0, @iarna/toml supports the TOML v0.5.0 specification. TOML 0.5.0
+brings some changes:
+
+* Delete characters (U+007F) are not allowed in plain strings. You can include them with
+ escaped unicode characters, eg `\u007f`.
+* Integers are specified as being 64bit unsigned values. These are
+ supported using `BigInt`s if you are using Node 10 or later.
+* Keys may be literal strings, that is, you can use single quoted strings to
+ quote key names, so the following is now valid:
+ 'a"b"c' = 123
+* The floating point values `nan`, `inf` and `-inf` are supported. The stringifier will no
+  longer strip NaN, Infinity and -Infinity, instead serializing them as these new values.
+* Datetimes can separate the date and time with a space instead of a T, so
+ `2017-12-01T00:00:00Z` can be written as `2017-12-01 00:00:00Z`.
+* Datetimes can be floating, that is, they can be represented without a timezone.
+ These are represented in javascript as Date objects whose `isFloating` property is true and
+ whose `toISOString` method will return a representation without a timezone.
+* Dates without times are now supported. Dates do not have timezones. Dates
+ are represented in javascript as a Date object whose `isDate` property is true and
+ whose `toISOString` method returns just the date.
+* Times without dates are now supported. Times do not have timezones. Times
+ are represented in javascript as a Date object whose `isTime` property is true and
+ whose `toISOString` method returns just the time.
+* Keys can now include dots to directly address deeper structures, so `a.b = 23` is
+  the equivalent of `a = {b = 23}` or an `[a]` table containing `b = 23`. These can
+  be used both as keys to regular tables and inline tables.
+* Integers can now be specified in binary, octal and hexadecimal by prefixing the
+ number with `0b`, `0o` and `0x` respectively. It is now illegal to left
+ pad a decimal value with zeros.
+
+Some parser details were also fixed:
+
+* Negative zero (`-0.0`) and positive zero (`0.0`) are distinct floating point values.
+* Negative integer zero (`-0`) is not distinguished from positive zero (`0`).
+
+# 1.7.1
+
+Another 18% speed boost on our overall benchmarks! This time it came from
+switching from string comparisons to integer by converting each character to
+its respective code point. This also necessitated rewriting the boolean
+parser to actually parse character-by-character as it should. End-of-stream
+is now marked with a numeric value outside of the Unicode range, rather than
+a Symbol, meaning that the parser's char property is now monomorphic.
+
+Bug fix, previously, `'abc''def'''` was accepted (as the value: `abcdef`).
+Now it will correctly raise an error.
+
+Spec tests now run against bombadil as well (it fails some, which is unsurprising
+given its incomplete state).
+
+# 1.7.0
+
+This release features an overall 15% speed boost on our benchmarks. This
+came from a few things:
+
+* Date parsing was rewritten to not use regexps, resulting in a huge speed increase.
+* Strings of all kinds and bare keywords now use tight loops to collect characters when this will help.
+* Regexps in general were mostly removed. This didn't result in a speed
+ change, but it did allow refactoring the parser to be a lot easier to
+ follow.
+* The internal state tracking now uses a class and is constructed with a
+ fixed set of properties, allowing v8's optimizer to be more effective.
+
+In the land of new features:
+
+* Errors in the syntax of your TOML will now have the `fromTOML` property
+ set to true. This is in addition to the `line`, `col` and `pos`
+ properties they already have.
+
+ The main use of this is to make it possible to distinguish between errors
+ in the TOML and errors in the parser code itself. This is of particular utility
+ when testing parse errors.
+
+# 1.6.0
+
+**FIXES**
+
+* TOML.stringify: Allow toJSON properties that aren't functions, to align with JSON.stringify's behavior.
+* TOML.stringify: Don't ever render keys as literal strings.
+* TOML.stringify: Don't try to escape control characters in literal strings.
+
+**FEATURES**
+
+* New Export: TOML.stringify.value, for encoding a stand alone inline value as TOML would. This produces
+ a TOML fragment, not a complete valid document.
+
+# 1.5.6
+
+* String literals are NOT supported as key names.
+* Accessing a shallower table after accessing it more deeply is ok and no longer crashes, eg:
+ ```toml
+ [a.b]
+ [a]
+ ```
+* Unicode characters in the reserved range now crash.
+* Empty bare keys, eg `[.abc]` or `[]` now crash.
+* Multiline backslash trimming supports CRs.
+* Multiline post quote trimming supports CRs.
+* Strings may not contain bare control chars (0x00-0x1f), except for \n, \r and \t.
+
+# 1.5.5
+
+* Yet MORE README fixes. 🙃
+
+# 1.5.4
+
+* README fix
+
+# 1.5.3
+
+* Benchmarks!
+* More tests!
+* More complete LICENSE information (some dev files are from other, MIT
+ licensed, projects, this is now more explicitly documented.)
+
+# 1.5.2
+
+* parse: Arrays with mixed types now throw errors, per the spec.
+* parse: Fix a parser bug that would result in errors when trying to parse arrays of numbers or dates
+ that were not separated by a space from the closing ].
+* parse: Fix a bug in the error pretty printer that resulted in errors on
+ the first line not getting the pretty print treatment.
+* stringify: Fix long standing bug where an array of Numbers, some of which required
+ decimals, would be emitted in a way that parsers would treat as mixed
+ Integer and Float values. Now if any Numbers in an array must be
+ represented with a decimal then all will be emitted such that parsers will
+ understand them to be Float.
+
+# 1.5.1
+
+* README fix
+
+# 1.5.0
+
+* A brand new TOML parser, from scratch, that performs like `toml-j0.4`
+ without the crashes and with vastly better error messages.
+* 100% test coverage for both the new parser and the existing stringifier. Some subtle bugs squashed!
+
+# v1.4.2
+
+* Revert fallback due to its having issues with the same files. (New plan
+ will be to write my own.)
+
+# v1.4.1
+
+* Depend on both `toml` and `toml-j0.4` with fallback from the latter to the
+ former when the latter crashes.
+
+# v1.4.0
+
+* Ducktype dates to make them compatible with `moment` and other `Date` replacements.
+
+# v1.3.1
+
+* Update docs with new toml module.
+
+# v1.3.0
+
+* Switch from `toml` to `toml-j0.4`, which is between 20x and 200x faster.
+ (The larger the input, the faster it is compared to `toml`).
+
+# v1.2.0
+
+* Return null when passed in null as the top level object.
+* Detect and skip invalid dates and numbers
+
+# v1.1.0
+
+* toJSON transformations are now honored (for everything except Date objects, as JSON represents them as strings).
+* Undefined/null values no longer result in exceptions, they now just result in the associated key being elided.
+
+# v1.0.1
+
+* Initial release
diff --git a/node_modules/@iarna/toml/LICENSE b/node_modules/@iarna/toml/LICENSE
new file mode 100755
index 0000000..51bcf57
--- /dev/null
+++ b/node_modules/@iarna/toml/LICENSE
@@ -0,0 +1,14 @@
+Copyright (c) 2016, Rebecca Turner
+
+Permission to use, copy, modify, and/or distribute this software for any
+purpose with or without fee is hereby granted, provided that the above
+copyright notice and this permission notice appear in all copies.
+
+THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
diff --git a/node_modules/@iarna/toml/README.md b/node_modules/@iarna/toml/README.md
new file mode 100755
index 0000000..1958324
--- /dev/null
+++ b/node_modules/@iarna/toml/README.md
@@ -0,0 +1,301 @@
+# @iarna/toml
+
+Better TOML parsing and stringifying all in that familiar JSON interface.
+
+[Coverage Status](https://coveralls.io/github/iarna/iarna-toml)
+
+# ** TOML 0.5.0 **
+
+### TOML Spec Support
+
+The most recent version as of 2018-07-26: [v0.5.0](https://github.com/mojombo/toml/blob/master/versions/en/toml-v0.5.0.md)
+
+### Example
+
+```js
+const TOML = require('@iarna/toml')
+const obj = TOML.parse(`[abc]
+foo = 123
+bar = [1,2,3]`)
+/* obj =
+{abc: {foo: 123, bar: [1,2,3]}}
+*/
+const str = TOML.stringify(obj)
+/* str =
+[abc]
+foo = 123
+bar = [ 1, 2, 3 ]
+*/
+```
+
+Visit the project github [for more examples](https://github.com/iarna/iarna-toml/tree/latest/examples)!
+
+
+## Why @iarna/toml
+
+* See [TOML-SPEC-SUPPORT](https://shared.by.re-becca.org/misc/TOML-SPEC-SUPPORT.html)
+ for a comparison of which TOML features are supported by the various
+ Node.js TOML parsers.
+* BigInt support on Node 10!
+* 100% test coverage.
+* Fast parsing. It's as much as 100 times
+ faster than `toml` and 3 times faster than `toml-j0.4`. However a recent
+ newcomer [`@ltd/j-toml`](https://www.npmjs.com/package/@ltd/j-toml) has
+ appeared with 0.5 support and astoundingly fast parsing speeds for large
+ text blocks. All I can say is you'll have to test your specific work loads
+ if you want to know which of @iarna/toml and @ltd/j-toml is faster for
+  you, as we currently excel in different areas.
+* Careful adherence to spec. Tests go beyond simple coverage.
+* Smallest parser bundle (if you use `@iarna/toml/parse-string`).
+* No deps.
+* Detailed and easy to read error messages‼
+
+```console
+> TOML.parse(src)
+Error: Unexpected character, expecting string, number, datetime, boolean, inline array or inline table at row 6, col 5, pos 87:
+5: "abc\"" = { abc=123,def="abc" }
+6> foo=sdkfj
+ ^
+7:
+```
+
+## TOML.parse(str) → Object [(example)](https://github.com/iarna/iarna-toml/blob/latest/examples/parse.js)
+
+Also available with: `require('@iarna/toml/parse-string')`
+
+Synchronously parse a TOML string and return an object.
+
+
+## TOML.stringify(obj) → String [(example)](https://github.com/iarna/iarna-toml/blob/latest/examples/stringify.js)
+
+Also available with: `require('@iarna/toml/stringify')`
+
+Serialize an object as TOML.
+
+## [your-object].toJSON
+
+If an object `TOML.stringify` is serializing has a `toJSON` method then it
+will call it to transform the object before serializing it. This matches
+the behavior of `JSON.stringify`.
+
+The one exception to this is that `toJSON` is not called for `Date` objects
+because `JSON` represents dates as strings and TOML can represent them natively.
+
+[`moment`](https://www.npmjs.com/package/moment) objects are treated the
+same as native `Date` objects, in this respect.
+
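+For a quick illustration, here is a minimal sketch (the `Config` class and its
+fields are invented for this example) of how a `toJSON` method shapes what gets
+serialized:
+
+```js
+const TOML = require('@iarna/toml')
+
+class Config {
+  constructor (port) {
+    this.port = port
+    this.secret = 'never written out'
+  }
+  // stringify will call this and serialize the returned object instead
+  toJSON () {
+    return { port: this.port }
+  }
+}
+
+console.log(TOML.stringify({ server: new Config(443) }))
+// Emits a [server] table containing only `port = 443`;
+// the `secret` property never reaches the TOML output.
+```
+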
+## TOML.stringify.value(obj) -> String
+
+Also available with: `require('@iarna/toml/stringify').value`
+
+Serialize a value as TOML would. This is a fragment and not a complete
+valid TOML document.
+
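+For example (a small sketch; the exact spacing of the emitted fragments may differ):
+
+```js
+const TOML = require('@iarna/toml')
+
+// Fragments, not complete documents:
+console.log(TOML.stringify.value([1, 2, 3]))      // [ 1, 2, 3 ]
+console.log(TOML.stringify.value('say "hi"'))     // "say \"hi\"" (quotes escaped in a basic string)
+console.log(TOML.stringify.value({ a: 1, b: 2 })) // an inline table such as { a = 1, b = 2 }
+```
+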
+## Promises and Streaming
+
+The parser provides alternative async and streaming interfaces, for times
+that you're working with really absurdly big TOML files and don't want to
+tie-up the event loop while it parses.
+
+### TOML.parse.async(str[, opts]) → Promise(Object) [(example)](https://github.com/iarna/iarna-toml/blob/latest/examples/parse-async.js)
+
+Also available with: `require('@iarna/toml/parse-async')`
+
+`opts.blocksize` is the amount of text to parse per pass through the event loop. Defaults to 40kb.
+
+Asynchronously parse a TOML string and return a promise of the resulting object.
+
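+A minimal sketch (the file name is hypothetical and the `blocksize` shown is just
+the default made explicit):
+
+```js
+const TOML = require('@iarna/toml')
+const fs = require('fs')
+
+const src = fs.readFileSync('big-config.toml', 'utf8')
+
+TOML.parse.async(src, { blocksize: 40960 })
+  .then(config => console.log(Object.keys(config)))
+  .catch(err => console.error(err.message))
+```
+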
+### TOML.parse.stream(readable) → Promise(Object) [(example)](https://github.com/iarna/iarna-toml/blob/latest/examples/parse-stream-readable.js)
+
+Also available with: `require('@iarna/toml/parse-stream')`
+
+Given a readable stream, parse it as it feeds us data. Return a promise of the resulting object.
+
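+For instance (sketch; the file name is hypothetical):
+
+```js
+const TOML = require('@iarna/toml')
+const fs = require('fs')
+
+TOML.parse.stream(fs.createReadStream('big-config.toml'))
+  .then(config => console.log(config))
+  .catch(err => console.error(err.message))
+```
+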
+### readable.pipe(TOML.parse.stream()) → Transform [(example)](https://github.com/iarna/iarna-toml/blob/latest/examples/parse-stream-through.js)
+
+Also available with: `require('@iarna/toml/parse-stream')`
+
+Returns a transform stream in object mode. When it completes, it emits the
+resulting object. Only one object will ever be emitted.
+
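+Something like this (sketch; the file name is hypothetical). The parsed object
+arrives as a single `data` event once the input stream ends:
+
+```js
+const TOML = require('@iarna/toml')
+const fs = require('fs')
+
+fs.createReadStream('big-config.toml')
+  .pipe(TOML.parse.stream())
+  .on('data', config => console.log(config)) // emitted once, when parsing completes
+  .on('error', err => console.error(err.message))
+```
+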
+## Lowlevel Interface [(example)](https://github.com/iarna/iarna-toml/blob/latest/examples/parse-lowlevel.js) [(example w/ parser debugging)](https://github.com/iarna/iarna-toml/blob/latest/examples/parse-lowlevel-debug.js)
+
+You construct a parser object, per TOML file you want to process:
+
+```js
+const TOMLParser = require('@iarna/toml/lib/toml-parser.js')
+const parser = new TOMLParser()
+```
+
+Then you call the `parse` method for each chunk as you read them, or in a
+single call:
+
+```js
+parser.parse(`hello = 'world'`)
+```
+
+And finally, you call the `finish` method to complete parsing and retrieve
+the resulting object.
+
+```js
+const data = parser.finish()
+```
+
+Both the `parse` method and `finish` method will throw if they find a
+problem with the string they were given. Error objects thrown from the
+parser have `pos`, `line` and `col` attributes. `TOML.parse` adds a visual
+summary of where in the source string there were issues using
+`parse-pretty-error` and you can too:
+
+```js
+const prettyError = require('./parse-pretty-error.js')
+const newErr = prettyError(err, sourceString)
+```
+
+## What's Different
+
+Version 2 of this module supports TOML 0.5.0. Other modules currently
+published to the npm registry support 0.4.0. 0.5.0 is mostly backwards
+compatible with 0.4.0, but if you have need, you can install @iarna/toml@1
+to get a version of this module that supports 0.4.0. Please see the
+[CHANGELOG](CHANGELOG.md#2.0.0) for details on exactly whats changed.
+
+## TOML we can't do
+
+* `-nan` is a valid TOML value and is converted into `NaN`. There is no way to
+ produce `-nan` when stringifying. Stringification will produce positive `nan`.
+* Detecting and erroring on invalid utf8 documents: This is because Node's
+ UTF8 processing converts invalid sequences into the placeholder character
+ and does not have facilities for reporting these as errors instead. We
+ _can_ detect the placeholder character, but it's valid to intentionally
+ include them in documents, so erroring on them is not great.
+* On versions of Node < 10, very large Integer values will lose precision.
+ On Node >=10, bigints are used.
+* Floating/local dates and times are still represented by JavaScript Date
+ objects, which don't actually support these concepts. The objects
+ returned have been modified so that you can determine what kind of thing
+ they are (with `isFloating`, `isDate`, `isTime` properties) and that
+ their ISO representation (via `toISOString`) is representative of their
+ TOML value. They will correctly round trip if you pass them to
+ `TOML.stringify`.
+* Binary, hexadecimal and octal values are converted to ordinary integers and
+ will be decimal if you stringify them.
+
+## Changes
+
+I write a by hand, honest-to-god,
+[CHANGELOG](https://github.com/iarna/iarna-toml/blob/latest/CHANGELOG.md)
+for this project. It's a description of what went into a release that you
+the consumer of the module could care about, not a list of git commits, so
+please check it out!
+
+## Benchmarks
+
+You can run them yourself with:
+
+```console
+$ npm run benchmark
+```
+
+The results below are from my desktop using Node 13.13.0. The library
+versions tested were `@iarna/toml@2.2.4`, `toml-j0.4@1.1.1`, `toml@3.0.0`,
+`@sgarciac/bombadil@2.3.0`, `@ltd/j-toml@0.5.107`, and `fast-toml@0.5.4`. The speed value is
+megabytes-per-second that the parser can process of that document type.
+Bigger is better. The percentage after average results is the margin of error.
+
+New here is fast-toml. fast-toml is very fast, for some datatypes, but it
+also is missing most error checking demanded by the spec. For 0.4, it is
+complete except for detail of multiline strings caught by the compliance
+tests. Its support for 0.5 is incomplete. Check out the
+[spec compliance](https://shared.by.re-becca.org/misc/TOML-SPEC-SUPPORT.html) doc
+for details.
+
+As this table is getting a little wide, with how npm and github display it,
+you can also view it separately in the
+[BENCHMARK](https://shared.by.re-becca.org/misc/BENCHMARK.html) document.
+
+| | @iarna/toml | toml-j0.4 | toml | @sgarciac/bombadil | @ltd/j-toml | fast-toml |
+| - | :---------: | :-------: | :--: | :----------------: | :---------: | :-------: |
+| **Overall** | 28MB/sec 0.35% | 6.5MB/sec 0.25% | 0.2MB/sec 0.70% | - | 35MB/sec 0.23% | - |
+| **Spec Example: v0.4.0** | 26MB/sec 0.37% | 10MB/sec 0.27% | 1MB/sec 0.42% | 1.2MB/sec 0.95% | 28MB/sec 0.31% | - |
+| **Spec Example: Hard Unicode** | 64MB/sec 0.59% | 18MB/sec 0.12% | 2MB/sec 0.20% | 0.6MB/sec 0.53% | 68MB/sec 0.31% | 78MB/sec 0.28% |
+| **Types: Array, Inline** | 7.3MB/sec 0.60% | 4MB/sec 0.16% | 0.1MB/sec 0.91% | 1.3MB/sec 0.81% | 10MB/sec 0.35% | 9MB/sec 0.16% |
+| **Types: Array** | 6.8MB/sec 0.19% | 6.7MB/sec 0.15% | 0.2MB/sec 0.79% | 1.2MB/sec 0.93% | 8.8MB/sec 0.47% | 27MB/sec 0.21% |
+| **Types: Boolean,** | 21MB/sec 0.20% | 9.4MB/sec 0.17% | 0.2MB/sec 0.96% | 1.8MB/sec 0.70% | 16MB/sec 0.20% | 8.4MB/sec 0.22% |
+| **Types: Datetime** | 18MB/sec 0.14% | 11MB/sec 0.15% | 0.3MB/sec 0.85% | 1.6MB/sec 0.45% | 9.8MB/sec 0.48% | 6.5MB/sec 0.23% |
+| **Types: Float** | 8.8MB/sec 0.09% | 5.9MB/sec 0.14% | 0.2MB/sec 0.51% | 2.1MB/sec 0.82% | 14MB/sec 0.15% | 7.9MB/sec 0.14% |
+| **Types: Int** | 5.9MB/sec 0.11% | 4.5MB/sec 0.28% | 0.1MB/sec 0.78% | 1.5MB/sec 0.64% | 10MB/sec 0.14% | 8MB/sec 0.17% |
+| **Types: Literal String, 7 char** | 26MB/sec 0.29% | 8.5MB/sec 0.32% | 0.3MB/sec 0.84% | 2.3MB/sec 1.02% | 23MB/sec 0.15% | 13MB/sec 0.15% |
+| **Types: Literal String, 92 char** | 46MB/sec 0.19% | 11MB/sec 0.20% | 0.3MB/sec 0.56% | 12MB/sec 0.92% | 101MB/sec 0.17% | 75MB/sec 0.29% |
+| **Types: Literal String, Multiline, 1079 char** | 22MB/sec 0.42% | 6.7MB/sec 0.55% | 0.9MB/sec 0.78% | 44MB/sec 1.00% | 350MB/sec 0.16% | 636MB/sec 0.16% |
+| **Types: Basic String, 7 char** | 25MB/sec 0.15% | 7.3MB/sec 0.18% | 0.2MB/sec 0.96% | 2.2MB/sec 1.09% | 14MB/sec 0.16% | 12MB/sec 0.22% |
+| **Types: Basic String, 92 char** | 43MB/sec 0.30% | 7.2MB/sec 0.16% | 0.1MB/sec 4.04% | 12MB/sec 1.33% | 71MB/sec 0.19% | 70MB/sec 0.23% |
+| **Types: Basic String, 1079 char** | 24MB/sec 0.45% | 5.8MB/sec 0.17% | 0.1MB/sec 3.64% | 44MB/sec 1.05% | 93MB/sec 0.29% | 635MB/sec 0.28% |
+| **Types: Table, Inline** | 9.7MB/sec 0.10% | 5.5MB/sec 0.22% | 0.1MB/sec 0.87% | 1.4MB/sec 1.18% | 8.7MB/sec 0.60% | 8.7MB/sec 0.22% |
+| **Types: Table** | 7.1MB/sec 0.14% | 5.6MB/sec 0.42% | 0.1MB/sec 0.65% | 1.4MB/sec 1.11% | 7.4MB/sec 0.70% | 18MB/sec 0.20% |
+| **Scaling: Array, Inline, 1000 elements** | 40MB/sec 0.21% | 2.4MB/sec 0.19% | 0.1MB/sec 0.35% | 1.6MB/sec 1.02% | 17MB/sec 0.15% | 32MB/sec 0.16% |
+| **Scaling: Array, Nested, 1000 deep** | 2MB/sec 0.15% | 1.7MB/sec 0.26% | 0.3MB/sec 0.58% | - | 1.8MB/sec 0.74% | 13MB/sec 0.20% |
+| **Scaling: Literal String, 40kb** | 61MB/sec 0.18% | 10MB/sec 0.15% | 3MB/sec 0.84% | 12MB/sec 0.51% | 551MB/sec 0.44% | 19kMB/sec 0.19% |
+| **Scaling: Literal String, Multiline, 40kb** | 62MB/sec 0.16% | 5MB/sec 0.45% | 0.2MB/sec 1.70% | 11MB/sec 0.74% | 291MB/sec 0.24% | 21kMB/sec 0.22% |
+| **Scaling: Basic String, Multiline, 40kb** | 62MB/sec 0.18% | 5.8MB/sec 0.38% | 2.9MB/sec 0.86% | 11MB/sec 0.41% | 949MB/sec 0.44% | 26kMB/sec 0.16% |
+| **Scaling: Basic String, 40kb** | 59MB/sec 0.20% | 6.3MB/sec 0.17% | 0.2MB/sec 1.95% | 12MB/sec 0.44% | 508MB/sec 0.35% | 18kMB/sec 0.15% |
+| **Scaling: Table, Inline, 1000 elements** | 28MB/sec 0.12% | 8.2MB/sec 0.19% | 0.3MB/sec 0.89% | 2.3MB/sec 1.14% | 5.3MB/sec 0.24% | 13MB/sec 0.20% |
+| **Scaling: Table, Inline, Nested, 1000 deep** | 7.8MB/sec 0.28% | 5MB/sec 0.20% | 0.1MB/sec 0.84% | - | 3.2MB/sec 0.52% | 10MB/sec 0.23% |
+
+## Tests
+
+The test suite is maintained at 100% coverage: [Coverage Status](https://coveralls.io/github/iarna/iarna-toml)
+
+The spec was carefully hand converted into a series of test framework
+independent (and mostly language independent) assertions, as pairs of TOML
+and YAML files. You can find those files here:
+[spec-test](https://github.com/iarna/iarna-toml/blob/latest/test/spec-test/).
+A number of examples of invalid Unicode were also written, but are difficult
+to make use of in Node.js where Unicode errors are silently hidden. You can
+find those here: [spec-test-disabled](https://github.com/iarna/iarna-toml/blob/latest/test/spec-test-disabled/).
+
+Further tests were written to increase coverage to 100%, these may be more
+implementation specific, but they can be found in [coverage](https://github.com/iarna/iarna-toml/blob/latest/test/coverage.js) and
+[coverage-error](https://github.com/iarna/iarna-toml/blob/latest/test/coverage-error.js).
+
+I've also written some quality assurance style tests, which don't contribute
+to coverage but do cover scenarios that could easily be problematic for some
+implementations. They can be found in:
+[test/qa.js](https://github.com/iarna/iarna-toml/blob/latest/test/qa.js) and
+[test/qa-error.js](https://github.com/iarna/iarna-toml/blob/latest/test/qa-error.js).
+
+All of the official example files from the TOML spec are run through this
+parser and compared to the official YAML files when available. These files are from the TOML spec as of:
+[357a4ba6](https://github.com/toml-lang/toml/tree/357a4ba6782e48ff26e646780bab11c90ed0a7bc)
+and specifically are:
+
+* [github.com/toml-lang/toml/tree/357a4ba6/examples](https://github.com/toml-lang/toml/tree/357a4ba6782e48ff26e646780bab11c90ed0a7bc/examples)
+* [github.com/toml-lang/toml/tree/357a4ba6/tests](https://github.com/toml-lang/toml/tree/357a4ba6782e48ff26e646780bab11c90ed0a7bc/tests)
+
+The stringifier is tested by round-tripping these same files, asserting that
+`TOML.parse(sourcefile)` deepEqual
+`TOML.parse(TOML.stringify(TOML.parse(sourcefile)))`. This is done in
+[test/roundtrip-examples.js](https://github.com/iarna/iarna-toml/blob/latest/test/round-tripping.js)
+There are also some tests written to complete coverage from stringification in:
+[test/stringify.js](https://github.com/iarna/iarna-toml/blob/latest/test/stringify.js)
+
+Tests for the async and streaming interfaces are in [test/async.js](https://github.com/iarna/iarna-toml/blob/latest/test/async.js) and [test/stream.js](https://github.com/iarna/iarna-toml/blob/latest/test/stream.js) respectively.
+
+Tests for the parsers debugging mode live in [test/devel.js](https://github.com/iarna/iarna-toml/blob/latest/test/devel.js).
+
+And finally, many more stringification tests were borrowed from [@othiym23](https://github.com/othiym23)'s
+[toml-stream](https://npmjs.com/package/toml-stream) module. They were fetched as of
+[b6f1e26b572d49742d49fa6a6d11524d003441fa](https://github.com/othiym23/toml-stream/tree/b6f1e26b572d49742d49fa6a6d11524d003441fa/test) and live in
+[test/toml-stream](https://github.com/iarna/iarna-toml/blob/latest/test/toml-stream/).
+
+## Improvements to make
+
+* In stringify:
+ * Any way to produce comments. As a JSON stand-in I'm not too worried
+ about this. That said, a document orientated fork is something I'd like
+ to look at eventually…
+ * Stringification could use some work on its error reporting. It reports
+ _what's_ wrong, but not where in your data structure it was.
+* Further optimize the parser:
+ * There are some debugging assertions left in the main parser, these should be moved to a subclass.
+ * Make the whole debugging parser thing work as a mixin instead of as a superclass.
diff --git a/node_modules/@iarna/toml/index.d.ts b/node_modules/@iarna/toml/index.d.ts
new file mode 100755
index 0000000..d37e2b6
--- /dev/null
+++ b/node_modules/@iarna/toml/index.d.ts
@@ -0,0 +1,57 @@
+import { Transform } from "stream";
+
+type JsonArray = boolean[] | number[] | string[] | JsonMap[] | Date[]
+type AnyJson = boolean | number | string | JsonMap | Date | JsonArray | JsonArray[]
+
+interface JsonMap {
+ [key: string]: AnyJson;
+}
+
+interface ParseOptions {
+ /**
+   * The amount of text to parse per pass through the event loop. Defaults to 40kb (`40960`).
+ */
+ blocksize: number
+}
+
+interface FuncParse {
+ /**
+ * Synchronously parse a TOML string and return an object.
+ */
+ (toml: string): JsonMap
+
+ /**
+ * Asynchronously parse a TOML string and return a promise of the resulting object.
+ */
+  async (toml: string, options?: ParseOptions): Promise<JsonMap>
+
+ /**
+ * Given a readable stream, parse it as it feeds us data. Return a promise of the resulting object.
+ */
+  stream (readable: NodeJS.ReadableStream): Promise<JsonMap>
+ stream (): Transform
+}
+
+interface FuncStringify {
+ /**
+ * Serialize an object as TOML.
+ *
+ * If an object `TOML.stringify` is serializing has a `toJSON` method
+ * then it will call it to transform the object before serializing it.
+ * This matches the behavior of JSON.stringify.
+ *
+ * The one exception to this is that `toJSON` is not called for `Date` objects
+ * because JSON represents dates as strings and TOML can represent them natively.
+ *
+ * `moment` objects are treated the same as native `Date` objects, in this respect.
+ */
+ (obj: JsonMap): string
+
+ /**
+ * Serialize a value as TOML would. This is a fragment and not a complete valid TOML document.
+ */
+ value (any: AnyJson): string
+}
+
+export const parse: FuncParse
+export const stringify: FuncStringify
diff --git a/node_modules/@iarna/toml/package.json b/node_modules/@iarna/toml/package.json
new file mode 100755
index 0000000..71f9e82
--- /dev/null
+++ b/node_modules/@iarna/toml/package.json
@@ -0,0 +1,82 @@
+{
+ "name": "@iarna/toml",
+ "version": "2.2.5",
+ "main": "toml.js",
+ "scripts": {
+ "test": "tap -J --100 test/*.js test/toml-stream/*.js",
+ "benchmark": "node benchmark.js && node benchmark-per-file.js && node results2table.js",
+ "prerelease": "npm t",
+ "prepack": "rm -f *~",
+ "postpublish": "git push --follow-tags",
+ "pretest": "iarna-standard",
+ "update-coc": "weallbehave -o . && git add CODE_OF_CONDUCT.md && git commit -m 'docs(coc): updated CODE_OF_CONDUCT.md'",
+ "update-contrib": "weallcontribute -o . && git add CONTRIBUTING.md && git commit -m 'docs(contributing): updated CONTRIBUTING.md'",
+ "setup-burntsushi-toml-suite": "[ -d test/burntsushi-toml-test ] || (git clone https://github.com/BurntSushi/toml-test test/burntsushi-toml-test; rimraf test/burntsushi-toml-test/.git/hooks/*); cd test/burntsushi-toml-test; git pull",
+ "setup-iarna-toml-suite": "[ -d test/spec-test ] || (git clone https://github.com/iarna/toml-spec-tests -b 0.5.0 test/spec-test; rimraf test/spec-test/.git/hooks/*); cd test/spec-test; git pull",
+ "prepare": "npm run setup-burntsushi-toml-suite && npm run setup-iarna-toml-suite"
+ },
+ "keywords": [
+ "toml",
+ "toml-parser",
+ "toml-stringifier",
+ "parser",
+ "stringifer",
+ "emitter",
+ "ini",
+ "tomlify",
+ "encoder",
+ "decoder"
+ ],
+ "author": "Rebecca Turner (http://re-becca.org/)",
+ "license": "ISC",
+ "description": "Better TOML parsing and stringifying all in that familiar JSON interface.",
+ "dependencies": {},
+ "devDependencies": {
+ "@iarna/standard": "^2.0.2",
+ "@ltd/j-toml": "^0.5.107",
+ "@perl/qx": "^1.1.0",
+ "@sgarciac/bombadil": "^2.3.0",
+ "ansi": "^0.3.1",
+ "approximate-number": "^2.0.0",
+ "benchmark": "^2.1.4",
+ "fast-toml": "^0.5.4",
+ "funstream": "^4.2.0",
+ "glob": "^7.1.6",
+ "js-yaml": "^3.13.1",
+ "rimraf": "^3.0.2",
+ "tap": "^12.0.1",
+ "toml": "^3.0.0",
+ "toml-j0.4": "^1.1.1",
+ "weallbehave": "*",
+ "weallcontribute": "*"
+ },
+ "files": [
+ "toml.js",
+ "stringify.js",
+ "parse.js",
+ "parse-string.js",
+ "parse-stream.js",
+ "parse-async.js",
+ "parse-pretty-error.js",
+ "lib/parser.js",
+ "lib/parser-debug.js",
+ "lib/toml-parser.js",
+ "lib/create-datetime.js",
+ "lib/create-date.js",
+ "lib/create-datetime-float.js",
+ "lib/create-time.js",
+ "lib/format-num.js",
+ "index.d.ts"
+ ],
+ "directories": {
+ "test": "test"
+ },
+ "repository": {
+ "type": "git",
+ "url": "git+https://github.com/iarna/iarna-toml.git"
+ },
+ "bugs": {
+ "url": "https://github.com/iarna/iarna-toml/issues"
+ },
+ "homepage": "https://github.com/iarna/iarna-toml#readme"
+}
diff --git a/node_modules/@iarna/toml/parse-async.js b/node_modules/@iarna/toml/parse-async.js
new file mode 100755
index 0000000..e5ff090
--- /dev/null
+++ b/node_modules/@iarna/toml/parse-async.js
@@ -0,0 +1,30 @@
+'use strict'
+module.exports = parseAsync
+
+const TOMLParser = require('./lib/toml-parser.js')
+const prettyError = require('./parse-pretty-error.js')
+
+function parseAsync (str, opts) {
+ if (!opts) opts = {}
+ const index = 0
+ const blocksize = opts.blocksize || 40960
+ const parser = new TOMLParser()
+ return new Promise((resolve, reject) => {
+ setImmediate(parseAsyncNext, index, blocksize, resolve, reject)
+ })
+ function parseAsyncNext (index, blocksize, resolve, reject) {
+ if (index >= str.length) {
+ try {
+ return resolve(parser.finish())
+ } catch (err) {
+ return reject(prettyError(err, str))
+ }
+ }
+ try {
+ parser.parse(str.slice(index, index + blocksize))
+ setImmediate(parseAsyncNext, index + blocksize, blocksize, resolve, reject)
+ } catch (err) {
+ reject(prettyError(err, str))
+ }
+ }
+}
diff --git a/node_modules/@iarna/toml/parse-pretty-error.js b/node_modules/@iarna/toml/parse-pretty-error.js
new file mode 100755
index 0000000..fc0d31f
--- /dev/null
+++ b/node_modules/@iarna/toml/parse-pretty-error.js
@@ -0,0 +1,33 @@
+'use strict'
+module.exports = prettyError
+
+function prettyError (err, buf) {
+ /* istanbul ignore if */
+ if (err.pos == null || err.line == null) return err
+ let msg = err.message
+ msg += ` at row ${err.line + 1}, col ${err.col + 1}, pos ${err.pos}:\n`
+
+ /* istanbul ignore else */
+ if (buf && buf.split) {
+ const lines = buf.split(/\n/)
+ const lineNumWidth = String(Math.min(lines.length, err.line + 3)).length
+ let linePadding = ' '
+ while (linePadding.length < lineNumWidth) linePadding += ' '
+ for (let ii = Math.max(0, err.line - 1); ii < Math.min(lines.length, err.line + 2); ++ii) {
+ let lineNum = String(ii + 1)
+ if (lineNum.length < lineNumWidth) lineNum = ' ' + lineNum
+ if (err.line === ii) {
+ msg += lineNum + '> ' + lines[ii] + '\n'
+ msg += linePadding + ' '
+ for (let hh = 0; hh < err.col; ++hh) {
+ msg += ' '
+ }
+ msg += '^\n'
+ } else {
+ msg += lineNum + ': ' + lines[ii] + '\n'
+ }
+ }
+ }
+ err.message = msg + '\n'
+ return err
+}
diff --git a/node_modules/@iarna/toml/parse-stream.js b/node_modules/@iarna/toml/parse-stream.js
new file mode 100755
index 0000000..fb9a644
--- /dev/null
+++ b/node_modules/@iarna/toml/parse-stream.js
@@ -0,0 +1,80 @@
+'use strict'
+module.exports = parseStream
+
+const stream = require('stream')
+const TOMLParser = require('./lib/toml-parser.js')
+
+function parseStream (stm) {
+ if (stm) {
+ return parseReadable(stm)
+ } else {
+ return parseTransform(stm)
+ }
+}
+
+function parseReadable (stm) {
+ const parser = new TOMLParser()
+ stm.setEncoding('utf8')
+ return new Promise((resolve, reject) => {
+ let readable
+ let ended = false
+ let errored = false
+ function finish () {
+ ended = true
+ if (readable) return
+ try {
+ resolve(parser.finish())
+ } catch (err) {
+ reject(err)
+ }
+ }
+ function error (err) {
+ errored = true
+ reject(err)
+ }
+ stm.once('end', finish)
+ stm.once('error', error)
+ readNext()
+
+ function readNext () {
+ readable = true
+ let data
+ while ((data = stm.read()) !== null) {
+ try {
+ parser.parse(data)
+ } catch (err) {
+ return error(err)
+ }
+ }
+ readable = false
+ /* istanbul ignore if */
+ if (ended) return finish()
+ /* istanbul ignore if */
+ if (errored) return
+ stm.once('readable', readNext)
+ }
+ })
+}
+
+function parseTransform () {
+ const parser = new TOMLParser()
+ return new stream.Transform({
+ objectMode: true,
+ transform (chunk, encoding, cb) {
+ try {
+ parser.parse(chunk.toString(encoding))
+ } catch (err) {
+ this.emit('error', err)
+ }
+ cb()
+ },
+ flush (cb) {
+ try {
+ this.push(parser.finish())
+ } catch (err) {
+ this.emit('error', err)
+ }
+ cb()
+ }
+ })
+}
diff --git a/node_modules/@iarna/toml/parse-string.js b/node_modules/@iarna/toml/parse-string.js
new file mode 100755
index 0000000..84ff7d4
--- /dev/null
+++ b/node_modules/@iarna/toml/parse-string.js
@@ -0,0 +1,18 @@
+'use strict'
+module.exports = parseString
+
+const TOMLParser = require('./lib/toml-parser.js')
+const prettyError = require('./parse-pretty-error.js')
+
+function parseString (str) {
+ if (global.Buffer && global.Buffer.isBuffer(str)) {
+ str = str.toString('utf8')
+ }
+ const parser = new TOMLParser()
+ try {
+ parser.parse(str)
+ return parser.finish()
+ } catch (err) {
+ throw prettyError(err, str)
+ }
+}
diff --git a/node_modules/@iarna/toml/parse.js b/node_modules/@iarna/toml/parse.js
new file mode 100755
index 0000000..923b9d3
--- /dev/null
+++ b/node_modules/@iarna/toml/parse.js
@@ -0,0 +1,5 @@
+'use strict'
+module.exports = require('./parse-string.js')
+module.exports.async = require('./parse-async.js')
+module.exports.stream = require('./parse-stream.js')
+module.exports.prettyError = require('./parse-pretty-error.js')
diff --git a/node_modules/@iarna/toml/stringify.js b/node_modules/@iarna/toml/stringify.js
new file mode 100755
index 0000000..958caae
--- /dev/null
+++ b/node_modules/@iarna/toml/stringify.js
@@ -0,0 +1,296 @@
+'use strict'
+module.exports = stringify
+module.exports.value = stringifyInline
+
+function stringify (obj) {
+ if (obj === null) throw typeError('null')
+ if (obj === void (0)) throw typeError('undefined')
+ if (typeof obj !== 'object') throw typeError(typeof obj)
+
+ if (typeof obj.toJSON === 'function') obj = obj.toJSON()
+ if (obj == null) return null
+ const type = tomlType(obj)
+ if (type !== 'table') throw typeError(type)
+ return stringifyObject('', '', obj)
+}
+
+function typeError (type) {
+ return new Error('Can only stringify objects, not ' + type)
+}
+
+function arrayOneTypeError () {
+ return new Error("Array values can't have mixed types")
+}
+
+function getInlineKeys (obj) {
+ return Object.keys(obj).filter(key => isInline(obj[key]))
+}
+function getComplexKeys (obj) {
+ return Object.keys(obj).filter(key => !isInline(obj[key]))
+}
+
+function toJSON (obj) {
+ let nobj = Array.isArray(obj) ? [] : Object.prototype.hasOwnProperty.call(obj, '__proto__') ? {['__proto__']: undefined} : {}
+ for (let prop of Object.keys(obj)) {
+ if (obj[prop] && typeof obj[prop].toJSON === 'function' && !('toISOString' in obj[prop])) {
+ nobj[prop] = obj[prop].toJSON()
+ } else {
+ nobj[prop] = obj[prop]
+ }
+ }
+ return nobj
+}
+
+function stringifyObject (prefix, indent, obj) {
+ obj = toJSON(obj)
+ var inlineKeys
+ var complexKeys
+ inlineKeys = getInlineKeys(obj)
+ complexKeys = getComplexKeys(obj)
+ var result = []
+ var inlineIndent = indent || ''
+ inlineKeys.forEach(key => {
+ var type = tomlType(obj[key])
+ if (type !== 'undefined' && type !== 'null') {
+ result.push(inlineIndent + stringifyKey(key) + ' = ' + stringifyAnyInline(obj[key], true))
+ }
+ })
+ if (result.length > 0) result.push('')
+ var complexIndent = prefix && inlineKeys.length > 0 ? indent + ' ' : ''
+ complexKeys.forEach(key => {
+ result.push(stringifyComplex(prefix, complexIndent, key, obj[key]))
+ })
+ return result.join('\n')
+}
+
+function isInline (value) {
+ switch (tomlType(value)) {
+ case 'undefined':
+ case 'null':
+ case 'integer':
+ case 'nan':
+ case 'float':
+ case 'boolean':
+ case 'string':
+ case 'datetime':
+ return true
+ case 'array':
+ return value.length === 0 || tomlType(value[0]) !== 'table'
+ case 'table':
+ return Object.keys(value).length === 0
+ /* istanbul ignore next */
+ default:
+ return false
+ }
+}
+
+function tomlType (value) {
+ if (value === undefined) {
+ return 'undefined'
+ } else if (value === null) {
+ return 'null'
+ /* eslint-disable valid-typeof */
+ } else if (typeof value === 'bigint' || (Number.isInteger(value) && !Object.is(value, -0))) {
+ return 'integer'
+ } else if (typeof value === 'number') {
+ return 'float'
+ } else if (typeof value === 'boolean') {
+ return 'boolean'
+ } else if (typeof value === 'string') {
+ return 'string'
+ } else if ('toISOString' in value) {
+ return isNaN(value) ? 'undefined' : 'datetime'
+ } else if (Array.isArray(value)) {
+ return 'array'
+ } else {
+ return 'table'
+ }
+}
+
+function stringifyKey (key) {
+ var keyStr = String(key)
+ if (/^[-A-Za-z0-9_]+$/.test(keyStr)) {
+ return keyStr
+ } else {
+ return stringifyBasicString(keyStr)
+ }
+}
+
+function stringifyBasicString (str) {
+ return '"' + escapeString(str).replace(/"/g, '\\"') + '"'
+}
+
+function stringifyLiteralString (str) {
+ return "'" + str + "'"
+}
+
+function numpad (num, str) {
+ while (str.length < num) str = '0' + str
+ return str
+}
+
+function escapeString (str) {
+ return str.replace(/\\/g, '\\\\')
+ .replace(/[\b]/g, '\\b')
+ .replace(/\t/g, '\\t')
+ .replace(/\n/g, '\\n')
+ .replace(/\f/g, '\\f')
+ .replace(/\r/g, '\\r')
+ /* eslint-disable no-control-regex */
+ .replace(/([\u0000-\u001f\u007f])/, c => '\\u' + numpad(4, c.codePointAt(0).toString(16)))
+ /* eslint-enable no-control-regex */
+}
+
+function stringifyMultilineString (str) {
+ let escaped = str.split(/\n/).map(str => {
+ return escapeString(str).replace(/"(?="")/g, '\\"')
+ }).join('\n')
+ if (escaped.slice(-1) === '"') escaped += '\\\n'
+ return '"""\n' + escaped + '"""'
+}
+
+function stringifyAnyInline (value, multilineOk) {
+ let type = tomlType(value)
+ if (type === 'string') {
+ if (multilineOk && /\n/.test(value)) {
+ type = 'string-multiline'
+ } else if (!/[\b\t\n\f\r']/.test(value) && /"/.test(value)) {
+ type = 'string-literal'
+ }
+ }
+ return stringifyInline(value, type)
+}
+
+function stringifyInline (value, type) {
+ /* istanbul ignore if */
+ if (!type) type = tomlType(value)
+ switch (type) {
+ case 'string-multiline':
+ return stringifyMultilineString(value)
+ case 'string':
+ return stringifyBasicString(value)
+ case 'string-literal':
+ return stringifyLiteralString(value)
+ case 'integer':
+ return stringifyInteger(value)
+ case 'float':
+ return stringifyFloat(value)
+ case 'boolean':
+ return stringifyBoolean(value)
+ case 'datetime':
+ return stringifyDatetime(value)
+ case 'array':
+ return stringifyInlineArray(value.filter(_ => tomlType(_) !== 'null' && tomlType(_) !== 'undefined' && tomlType(_) !== 'nan'))
+ case 'table':
+ return stringifyInlineTable(value)
+ /* istanbul ignore next */
+ default:
+ throw typeError(type)
+ }
+}
+
+function stringifyInteger (value) {
+ /* eslint-disable security/detect-unsafe-regex */
+ return String(value).replace(/\B(?=(\d{3})+(?!\d))/g, '_')
+}
+
+function stringifyFloat (value) {
+ if (value === Infinity) {
+ return 'inf'
+ } else if (value === -Infinity) {
+ return '-inf'
+ } else if (Object.is(value, NaN)) {
+ return 'nan'
+ } else if (Object.is(value, -0)) {
+ return '-0.0'
+ }
+ var chunks = String(value).split('.')
+ var int = chunks[0]
+ var dec = chunks[1] || 0
+ return stringifyInteger(int) + '.' + dec
+}
+
+function stringifyBoolean (value) {
+ return String(value)
+}
+
+function stringifyDatetime (value) {
+ return value.toISOString()
+}
+
+function isNumber (type) {
+ return type === 'float' || type === 'integer'
+}
+function arrayType (values) {
+ var contentType = tomlType(values[0])
+ if (values.every(_ => tomlType(_) === contentType)) return contentType
+ // mixed integer/float, emit as floats
+ if (values.every(_ => isNumber(tomlType(_)))) return 'float'
+ return 'mixed'
+}
+function validateArray (values) {
+ const type = arrayType(values)
+ if (type === 'mixed') {
+ throw arrayOneTypeError()
+ }
+ return type
+}
+
+function stringifyInlineArray (values) {
+ values = toJSON(values)
+ const type = validateArray(values)
+ var result = '['
+ var stringified = values.map(_ => stringifyInline(_, type))
+ if (stringified.join(', ').length > 60 || /\n/.test(stringified)) {
+ result += '\n ' + stringified.join(',\n ') + '\n'
+ } else {
+ result += ' ' + stringified.join(', ') + (stringified.length > 0 ? ' ' : '')
+ }
+ return result + ']'
+}
+
+function stringifyInlineTable (value) {
+ value = toJSON(value)
+ var result = []
+ Object.keys(value).forEach(key => {
+ result.push(stringifyKey(key) + ' = ' + stringifyAnyInline(value[key], false))
+ })
+ return '{ ' + result.join(', ') + (result.length > 0 ? ' ' : '') + '}'
+}
+
+function stringifyComplex (prefix, indent, key, value) {
+ var valueType = tomlType(value)
+ /* istanbul ignore else */
+ if (valueType === 'array') {
+ return stringifyArrayOfTables(prefix, indent, key, value)
+ } else if (valueType === 'table') {
+ return stringifyComplexTable(prefix, indent, key, value)
+ } else {
+ throw typeError(valueType)
+ }
+}
+
+function stringifyArrayOfTables (prefix, indent, key, values) {
+ values = toJSON(values)
+ validateArray(values)
+ var firstValueType = tomlType(values[0])
+ /* istanbul ignore if */
+ if (firstValueType !== 'table') throw typeError(firstValueType)
+ var fullKey = prefix + stringifyKey(key)
+ var result = ''
+ values.forEach(table => {
+ if (result.length > 0) result += '\n'
+ result += indent + '[[' + fullKey + ']]\n'
+ result += stringifyObject(fullKey + '.', indent, table)
+ })
+ return result
+}
+
+function stringifyComplexTable (prefix, indent, key, value) {
+ var fullKey = prefix + stringifyKey(key)
+ var result = ''
+ if (getInlineKeys(value).length > 0) {
+ result += indent + '[' + fullKey + ']\n'
+ }
+ return result + stringifyObject(fullKey + '.', indent, value)
+}
diff --git a/node_modules/@iarna/toml/toml.js b/node_modules/@iarna/toml/toml.js
new file mode 100755
index 0000000..edca17c
--- /dev/null
+++ b/node_modules/@iarna/toml/toml.js
@@ -0,0 +1,3 @@
+'use strict'
+exports.parse = require('./parse.js')
+exports.stringify = require('./stringify.js')
diff --git a/node_modules/@shikijs/engine-oniguruma/LICENSE b/node_modules/@shikijs/engine-oniguruma/LICENSE
new file mode 100644
index 0000000..6a62718
--- /dev/null
+++ b/node_modules/@shikijs/engine-oniguruma/LICENSE
@@ -0,0 +1,22 @@
+MIT License
+
+Copyright (c) 2021 Pine Wu
+Copyright (c) 2023 Anthony Fu
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/node_modules/@shikijs/engine-oniguruma/README.md b/node_modules/@shikijs/engine-oniguruma/README.md
new file mode 100644
index 0000000..7955dea
--- /dev/null
+++ b/node_modules/@shikijs/engine-oniguruma/README.md
@@ -0,0 +1,9 @@
+# @shikijs/engine-oniguruma
+
+Engine for Shiki using Oniguruma RegExp engine in WebAssembly.
+
+[Documentation](https://shiki.style/guide/regex-engines)
+
+## License
+
+MIT
diff --git a/node_modules/@shikijs/engine-oniguruma/package.json b/node_modules/@shikijs/engine-oniguruma/package.json
new file mode 100644
index 0000000..6189931
--- /dev/null
+++ b/node_modules/@shikijs/engine-oniguruma/package.json
@@ -0,0 +1,59 @@
+{
+ "name": "@shikijs/engine-oniguruma",
+ "type": "module",
+ "version": "1.29.2",
+ "description": "Engine for Shiki using Oniguruma RegExp engine in WebAssembly",
+ "author": "Anthony Fu ",
+ "license": "MIT",
+ "homepage": "https://github.com/shikijs/shiki#readme",
+ "repository": {
+ "type": "git",
+ "url": "git+https://github.com/shikijs/shiki.git",
+ "directory": "packages/engine-oniguruma"
+ },
+ "bugs": "https://github.com/shikijs/shiki/issues",
+ "keywords": [
+ "shiki",
+ "shiki-engine",
+ "oniguruma"
+ ],
+ "sideEffects": false,
+ "exports": {
+ ".": {
+ "types": "./dist/index.d.mts",
+ "default": "./dist/index.mjs"
+ },
+ "./wasm-inlined": {
+ "types": "./dist/wasm-inlined.d.mts",
+ "default": "./dist/wasm-inlined.mjs"
+ }
+ },
+ "main": "./dist/index.mjs",
+ "module": "./dist/index.mjs",
+ "types": "./dist/index.d.mts",
+ "typesVersions": {
+ "*": {
+ "wasm-inlined": [
+ "./dist/wasm-inlined.d.mts"
+ ],
+ "*": [
+ "./dist/*",
+ "./*"
+ ]
+ }
+ },
+ "files": [
+ "dist"
+ ],
+ "dependencies": {
+ "@shikijs/vscode-textmate": "^10.0.1",
+ "@shikijs/types": "1.29.2"
+ },
+ "devDependencies": {
+ "vscode-oniguruma": "1.7.0"
+ },
+ "scripts": {
+ "build": "rimraf dist && rollup -c",
+ "dev": "rollup -cw"
+ }
+}
\ No newline at end of file
diff --git a/node_modules/@shikijs/types/LICENSE b/node_modules/@shikijs/types/LICENSE
new file mode 100644
index 0000000..6a62718
--- /dev/null
+++ b/node_modules/@shikijs/types/LICENSE
@@ -0,0 +1,22 @@
+MIT License
+
+Copyright (c) 2021 Pine Wu
+Copyright (c) 2023 Anthony Fu
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/node_modules/@shikijs/types/README.md b/node_modules/@shikijs/types/README.md
new file mode 100644
index 0000000..371182c
--- /dev/null
+++ b/node_modules/@shikijs/types/README.md
@@ -0,0 +1,7 @@
+# @shikijs/types
+
+Types for Shiki.
+
+## License
+
+MIT
diff --git a/node_modules/@shikijs/types/package.json b/node_modules/@shikijs/types/package.json
new file mode 100644
index 0000000..9c7fb28
--- /dev/null
+++ b/node_modules/@shikijs/types/package.json
@@ -0,0 +1,39 @@
+{
+ "name": "@shikijs/types",
+ "type": "module",
+ "version": "1.29.2",
+ "description": "Type definitions for Shiki",
+ "author": "Anthony Fu ",
+ "license": "MIT",
+ "homepage": "https://github.com/shikijs/shiki#readme",
+ "repository": {
+ "type": "git",
+ "url": "git+https://github.com/shikijs/shiki.git",
+ "directory": "packages/types"
+ },
+ "bugs": "https://github.com/shikijs/shiki/issues",
+ "keywords": [
+ "shiki"
+ ],
+ "sideEffects": false,
+ "exports": {
+ ".": {
+ "types": "./dist/index.d.mts",
+ "default": "./dist/index.mjs"
+ }
+ },
+ "main": "./dist/index.mjs",
+ "module": "./dist/index.mjs",
+ "types": "./dist/index.d.mts",
+ "files": [
+ "dist"
+ ],
+ "dependencies": {
+ "@shikijs/vscode-textmate": "^10.0.1",
+ "@types/hast": "^3.0.4"
+ },
+ "scripts": {
+ "build": "unbuild",
+ "dev": "unbuild --stub"
+ }
+}
\ No newline at end of file
diff --git a/node_modules/@shikijs/vscode-textmate/LICENSE.md b/node_modules/@shikijs/vscode-textmate/LICENSE.md
new file mode 100644
index 0000000..5ae193c
--- /dev/null
+++ b/node_modules/@shikijs/vscode-textmate/LICENSE.md
@@ -0,0 +1,21 @@
+The MIT License (MIT)
+
+Copyright (c) Microsoft Corporation
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/node_modules/@shikijs/vscode-textmate/README.md b/node_modules/@shikijs/vscode-textmate/README.md
new file mode 100644
index 0000000..28d9f6b
--- /dev/null
+++ b/node_modules/@shikijs/vscode-textmate/README.md
@@ -0,0 +1,9 @@
+## Fork of [`microsoft/vscode-textmate`](https://github.com/microsoft/vscode-textmate)
+
+Changes made in this fork:
+
+- Change all `async` operations to `sync`; `onigLib` option now required to be resolved instead of a promise.
+- Use `tsup` to bundle the lib, ship as a single file ES module.
+- Remove debug flags and some other unnecessary exports.
+- Convert `EncodedTokenAttributes` from namespace to class, rename to `EncodedTokenMetadata`
+- Support RegExp literals in grammar object ([#3](https://github.com/shikijs/vscode-textmate/pull/3))
diff --git a/node_modules/@shikijs/vscode-textmate/package.json b/node_modules/@shikijs/vscode-textmate/package.json
new file mode 100644
index 0000000..9d35e9c
--- /dev/null
+++ b/node_modules/@shikijs/vscode-textmate/package.json
@@ -0,0 +1,46 @@
+{
+ "name": "@shikijs/vscode-textmate",
+ "version": "10.0.2",
+ "type": "module",
+ "description": "Shiki's fork of `vscode-textmate`",
+ "author": {
+ "name": "Microsoft Corporation"
+ },
+ "exports": {
+ ".": "./dist/index.js"
+ },
+ "main": "./dist/index.js",
+ "types": "./dist/index.d.ts",
+ "repository": {
+ "type": "git",
+ "url": "https://github.com/shikijs/vscode-textmate"
+ },
+ "files": [
+ "dist"
+ ],
+ "license": "MIT",
+ "bugs": {
+ "url": "https://github.com/shikijs/vscode-textmate/issues"
+ },
+ "devDependencies": {
+ "@types/mocha": "^9.1.1",
+ "@types/node": "^16.18.121",
+ "bumpp": "^9.9.0",
+ "mocha": "^9.2.2",
+ "tsup": "^8.3.5",
+ "tsx": "^4.19.2",
+ "typescript": "^5.7.2",
+ "vscode-oniguruma": "^1.7.0"
+ },
+ "scripts": {
+ "build": "tsup",
+ "test": "mocha --ui=tdd ./src/tests/all.test.ts",
+ "benchmark": "node benchmark/benchmark.js",
+ "inspect": "tsx src/tests/inspect.ts",
+ "typecheck": "tsc --noEmit",
+ "tmconvert": "node scripts/tmconvert.js",
+ "version": "npm run compile && npm run test",
+ "postversion": "git push && git push --tags",
+ "release": "bumpp && pnpm publish"
+ }
+}
\ No newline at end of file
diff --git a/node_modules/@types/hast/LICENSE b/node_modules/@types/hast/LICENSE
new file mode 100644
index 0000000..9e841e7
--- /dev/null
+++ b/node_modules/@types/hast/LICENSE
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) Microsoft Corporation.
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE
diff --git a/node_modules/@types/hast/README.md b/node_modules/@types/hast/README.md
new file mode 100644
index 0000000..b548c80
--- /dev/null
+++ b/node_modules/@types/hast/README.md
@@ -0,0 +1,15 @@
+# Installation
+> `npm install --save @types/hast`
+
+# Summary
+This package contains type definitions for hast (https://github.com/syntax-tree/hast).
+
+# Details
+Files were exported from https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/hast.
+
+### Additional Details
+ * Last updated: Tue, 30 Jan 2024 21:35:45 GMT
+ * Dependencies: [@types/unist](https://npmjs.com/package/@types/unist)
+
+# Credits
+These definitions were written by [lukeggchapman](https://github.com/lukeggchapman), [Junyoung Choi](https://github.com/rokt33r), [Christian Murphy](https://github.com/ChristianMurphy), and [Remco Haszing](https://github.com/remcohaszing).
diff --git a/node_modules/@types/hast/index.d.ts b/node_modules/@types/hast/index.d.ts
new file mode 100644
index 0000000..122b5b4
--- /dev/null
+++ b/node_modules/@types/hast/index.d.ts
@@ -0,0 +1,282 @@
+import type { Data as UnistData, Literal as UnistLiteral, Node as UnistNode, Parent as UnistParent } from "unist";
+
+// ## Interfaces
+
+/**
+ * Info associated with hast nodes by the ecosystem.
+ *
+ * This space is guaranteed to never be specified by unist or hast.
+ * But you can use it in utilities and plugins to store data.
+ *
+ * This type can be augmented to register custom data.
+ * For example:
+ *
+ * ```ts
+ * declare module 'hast' {
+ * interface Data {
+ * // `someNode.data.myId` is typed as `number | undefined`
+ * myId?: number | undefined
+ * }
+ * }
+ * ```
+ */
+export interface Data extends UnistData {}
+
+/**
+ * Info associated with an element.
+ */
+export interface Properties {
+ [PropertyName: string]: boolean | number | string | null | undefined | Array<string | number>;
+}
+
+// ## Content maps
+
+/**
+ * Union of registered hast nodes that can occur in {@link Element}.
+ *
+ * To register more custom hast nodes, add them to {@link ElementContentMap}.
+ * They will be automatically added here.
+ */
+export type ElementContent = ElementContentMap[keyof ElementContentMap];
+
+/**
+ * Registry of all hast nodes that can occur as children of {@link Element}.
+ *
+ * For a union of all {@link Element} children, see {@link ElementContent}.
+ */
+export interface ElementContentMap {
+ comment: Comment;
+ element: Element;
+ text: Text;
+}
+
+/**
+ * Union of registered hast nodes that can occur in {@link Root}.
+ *
+ * To register custom hast nodes, add them to {@link RootContentMap}.
+ * They will be automatically added here.
+ */
+export type RootContent = RootContentMap[keyof RootContentMap];
+
+/**
+ * Registry of all hast nodes that can occur as children of {@link Root}.
+ *
+ * > 👉 **Note**: {@link Root} does not need to be an entire document.
+ * > it can also be a fragment.
+ *
+ * For a union of all {@link Root} children, see {@link RootContent}.
+ */
+export interface RootContentMap {
+ comment: Comment;
+ doctype: Doctype;
+ element: Element;
+ text: Text;
+}
+
+// ### Special content types
+
+/**
+ * Union of registered hast nodes that can occur in {@link Root}.
+ *
+ * @deprecated Use {@link RootContent} instead.
+ */
+export type Content = RootContent;
+
+/**
+ * Union of registered hast literals.
+ *
+ * To register custom hast nodes, add them to {@link RootContentMap} and other
+ * places where relevant.
+ * They will be automatically added here.
+ */
+export type Literals = Extract<Nodes, UnistLiteral>;
+
+/**
+ * Union of registered hast nodes.
+ *
+ * To register custom hast nodes, add them to {@link RootContentMap} and other
+ * places where relevant.
+ * They will be automatically added here.
+ */
+export type Nodes = Root | RootContent;
+
+/**
+ * Union of registered hast parents.
+ *
+ * To register custom hast nodes, add them to {@link RootContentMap} and other
+ * places where relevant.
+ * They will be automatically added here.
+ */
+export type Parents = Extract<Nodes, UnistParent>;
+
+// ## Abstract nodes
+
+/**
+ * Abstract hast node.
+ *
+ * This interface is supposed to be extended.
+ * If you can use {@link Literal} or {@link Parent}, you should.
+ * But for example in HTML, a `Doctype` is neither literal nor parent, but
+ * still a node.
+ *
+ * To register custom hast nodes, add them to {@link RootContentMap} and other
+ * places where relevant (such as {@link ElementContentMap}).
+ *
+ * For a union of all registered hast nodes, see {@link Nodes}.
+ */
+export interface Node extends UnistNode {
+ /**
+ * Info from the ecosystem.
+ */
+ data?: Data | undefined;
+}
+
+/**
+ * Abstract hast node that contains the smallest possible value.
+ *
+ * This interface is supposed to be extended if you make custom hast nodes.
+ *
+ * For a union of all registered hast literals, see {@link Literals}.
+ */
+export interface Literal extends Node {
+ /**
+ * Plain-text value.
+ */
+ value: string;
+}
+
+/**
+ * Abstract hast node that contains other hast nodes (*children*).
+ *
+ * This interface is supposed to be extended if you make custom hast nodes.
+ *
+ * For a union of all registered hast parents, see {@link Parents}.
+ */
+export interface Parent extends Node {
+ /**
+ * List of children.
+ */
+ children: RootContent[];
+}
+
+// ## Concrete nodes
+
+/**
+ * HTML comment.
+ */
+export interface Comment extends Literal {
+ /**
+ * Node type of HTML comments in hast.
+ */
+ type: "comment";
+ /**
+ * Data associated with the comment.
+ */
+ data?: CommentData | undefined;
+}
+
+/**
+ * Info associated with hast comments by the ecosystem.
+ */
+export interface CommentData extends Data {}
+
+/**
+ * HTML document type.
+ */
+export interface Doctype extends UnistNode {
+ /**
+ * Node type of HTML document types in hast.
+ */
+ type: "doctype";
+ /**
+ * Data associated with the doctype.
+ */
+ data?: DoctypeData | undefined;
+}
+
+/**
+ * Info associated with hast doctypes by the ecosystem.
+ */
+export interface DoctypeData extends Data {}
+
+/**
+ * HTML element.
+ */
+export interface Element extends Parent {
+ /**
+ * Node type of elements.
+ */
+ type: "element";
+ /**
+ * Tag name (such as `'body'`) of the element.
+ */
+ tagName: string;
+ /**
+ * Info associated with the element.
+ */
+ properties: Properties;
+ /**
+ * Children of element.
+ */
+ children: ElementContent[];
+ /**
+ * When the `tagName` field is `'template'`, a `content` field can be
+ * present.
+ */
+ content?: Root | undefined;
+ /**
+ * Data associated with the element.
+ */
+ data?: ElementData | undefined;
+}
+
+/**
+ * Info associated with hast elements by the ecosystem.
+ */
+export interface ElementData extends Data {}
+
+/**
+ * Document fragment or a whole document.
+ *
+ * Should be used as the root of a tree and must not be used as a child.
+ *
+ * Can also be used as the value for the content field on a `'template'` element.
+ */
+export interface Root extends Parent {
+ /**
+ * Node type of hast root.
+ */
+ type: "root";
+ /**
+ * Children of root.
+ */
+ children: RootContent[];
+ /**
+ * Data associated with the hast root.
+ */
+ data?: RootData | undefined;
+}
+
+/**
+ * Info associated with hast root nodes by the ecosystem.
+ */
+export interface RootData extends Data {}
+
+/**
+ * HTML character data (plain text).
+ */
+export interface Text extends Literal {
+ /**
+ * Node type of HTML character data (plain text) in hast.
+ */
+ type: "text";
+ /**
+ * Data associated with the text.
+ */
+ data?: TextData | undefined;
+}
+
+/**
+ * Info associated with hast texts by the ecosystem.
+ */
+export interface TextData extends Data {}
diff --git a/node_modules/@types/hast/package.json b/node_modules/@types/hast/package.json
new file mode 100644
index 0000000..464e3f7
--- /dev/null
+++ b/node_modules/@types/hast/package.json
@@ -0,0 +1,42 @@
+{
+ "name": "@types/hast",
+ "version": "3.0.4",
+ "description": "TypeScript definitions for hast",
+ "homepage": "https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/hast",
+ "license": "MIT",
+ "contributors": [
+ {
+ "name": "lukeggchapman",
+ "githubUsername": "lukeggchapman",
+ "url": "https://github.com/lukeggchapman"
+ },
+ {
+ "name": "Junyoung Choi",
+ "githubUsername": "rokt33r",
+ "url": "https://github.com/rokt33r"
+ },
+ {
+ "name": "Christian Murphy",
+ "githubUsername": "ChristianMurphy",
+ "url": "https://github.com/ChristianMurphy"
+ },
+ {
+ "name": "Remco Haszing",
+ "githubUsername": "remcohaszing",
+ "url": "https://github.com/remcohaszing"
+ }
+ ],
+ "main": "",
+ "types": "index.d.ts",
+ "repository": {
+ "type": "git",
+ "url": "https://github.com/DefinitelyTyped/DefinitelyTyped.git",
+ "directory": "types/hast"
+ },
+ "scripts": {},
+ "dependencies": {
+ "@types/unist": "*"
+ },
+ "typesPublisherContentHash": "3f3f73826d79157c12087f5bb36195319c6f435b9e218fa7a8de88d1cc64d097",
+ "typeScriptVersion": "4.6"
+}
\ No newline at end of file
diff --git a/node_modules/@types/unist/LICENSE b/node_modules/@types/unist/LICENSE
new file mode 100644
index 0000000..9e841e7
--- /dev/null
+++ b/node_modules/@types/unist/LICENSE
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) Microsoft Corporation.
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE
diff --git a/node_modules/@types/unist/README.md b/node_modules/@types/unist/README.md
new file mode 100644
index 0000000..3beb9b3
--- /dev/null
+++ b/node_modules/@types/unist/README.md
@@ -0,0 +1,15 @@
+# Installation
+> `npm install --save @types/unist`
+
+# Summary
+This package contains type definitions for unist (https://github.com/syntax-tree/unist).
+
+# Details
+Files were exported from https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/unist.
+
+### Additional Details
+ * Last updated: Thu, 15 Aug 2024 02:18:53 GMT
+ * Dependencies: none
+
+# Credits
+These definitions were written by [bizen241](https://github.com/bizen241), [Jun Lu](https://github.com/lujun2), [Hernan Rajchert](https://github.com/hrajchert), [Titus Wormer](https://github.com/wooorm), [Junyoung Choi](https://github.com/rokt33r), [Ben Moon](https://github.com/GuiltyDolphin), [JounQin](https://github.com/JounQin), and [Remco Haszing](https://github.com/remcohaszing).
diff --git a/node_modules/@types/unist/index.d.ts b/node_modules/@types/unist/index.d.ts
new file mode 100644
index 0000000..513ddee
--- /dev/null
+++ b/node_modules/@types/unist/index.d.ts
@@ -0,0 +1,119 @@
+// ## Interfaces
+
+/**
+ * Info associated with nodes by the ecosystem.
+ *
+ * This space is guaranteed to never be specified by unist or specifications
+ * implementing unist.
+ * But you can use it in utilities and plugins to store data.
+ *
+ * This type can be augmented to register custom data.
+ * For example:
+ *
+ * ```ts
+ * declare module 'unist' {
+ * interface Data {
+ * // `someNode.data.myId` is typed as `number | undefined`
+ * myId?: number | undefined
+ * }
+ * }
+ * ```
+ */
+export interface Data {}
+
+/**
+ * One place in a source file.
+ */
+export interface Point {
+ /**
+ * Line in a source file (1-indexed integer).
+ */
+ line: number;
+
+ /**
+ * Column in a source file (1-indexed integer).
+ */
+ column: number;
+ /**
+ * Character in a source file (0-indexed integer).
+ */
+ offset?: number | undefined;
+}
+
+/**
+ * Position of a node in a source document.
+ *
+ * A position is a range between two points.
+ */
+export interface Position {
+ /**
+ * Place of the first character of the parsed source region.
+ */
+ start: Point;
+
+ /**
+ * Place of the first character after the parsed source region.
+ */
+ end: Point;
+}
+
+// ## Abstract nodes
+
+/**
+ * Abstract unist node that contains the smallest possible value.
+ *
+ * This interface is supposed to be extended.
+ *
+ * For example, in HTML, a `text` node is a leaf that contains text.
+ */
+export interface Literal extends Node {
+ /**
+ * Plain value.
+ */
+ value: unknown;
+}
+
+/**
+ * Abstract unist node.
+ *
+ * The syntactic units in unist syntax trees are called nodes.
+ *
+ * This interface is supposed to be extended.
+ * If you can use {@link Literal} or {@link Parent}, you should.
+ * But for example in markdown, a `thematicBreak` (`***`), is neither literal
+ * nor parent, but still a node.
+ */
+export interface Node {
+ /**
+ * Node type.
+ */
+ type: string;
+
+ /**
+ * Info from the ecosystem.
+ */
+ data?: Data | undefined;
+
+ /**
+ * Position of a node in a source document.
+ *
+ * Nodes that are generated (not in the original source document) must not
+ * have a position.
+ */
+ position?: Position | undefined;
+}
+
+/**
+ * Abstract unist node that contains other nodes (*children*).
+ *
+ * This interface is supposed to be extended.
+ *
+ * For example, in XML, an element is a parent of different things, such as
+ * comments, text, and further elements.
+ */
+export interface Parent extends Node {
+ /**
+ * List of children.
+ */
+ children: Node[];
+}
diff --git a/node_modules/@types/unist/package.json b/node_modules/@types/unist/package.json
new file mode 100644
index 0000000..d2092db
--- /dev/null
+++ b/node_modules/@types/unist/package.json
@@ -0,0 +1,60 @@
+{
+ "name": "@types/unist",
+ "version": "3.0.3",
+ "description": "TypeScript definitions for unist",
+ "homepage": "https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/unist",
+ "license": "MIT",
+ "contributors": [
+ {
+ "name": "bizen241",
+ "githubUsername": "bizen241",
+ "url": "https://github.com/bizen241"
+ },
+ {
+ "name": "Jun Lu",
+ "githubUsername": "lujun2",
+ "url": "https://github.com/lujun2"
+ },
+ {
+ "name": "Hernan Rajchert",
+ "githubUsername": "hrajchert",
+ "url": "https://github.com/hrajchert"
+ },
+ {
+ "name": "Titus Wormer",
+ "githubUsername": "wooorm",
+ "url": "https://github.com/wooorm"
+ },
+ {
+ "name": "Junyoung Choi",
+ "githubUsername": "rokt33r",
+ "url": "https://github.com/rokt33r"
+ },
+ {
+ "name": "Ben Moon",
+ "githubUsername": "GuiltyDolphin",
+ "url": "https://github.com/GuiltyDolphin"
+ },
+ {
+ "name": "JounQin",
+ "githubUsername": "JounQin",
+ "url": "https://github.com/JounQin"
+ },
+ {
+ "name": "Remco Haszing",
+ "githubUsername": "remcohaszing",
+ "url": "https://github.com/remcohaszing"
+ }
+ ],
+ "main": "",
+ "types": "index.d.ts",
+ "repository": {
+ "type": "git",
+ "url": "https://github.com/DefinitelyTyped/DefinitelyTyped.git",
+ "directory": "types/unist"
+ },
+ "scripts": {},
+ "dependencies": {},
+ "typesPublisherContentHash": "7f3d5ce8d56003f3583a5317f98d444bdc99910c7b486c6b10af4f38694e61fe",
+ "typeScriptVersion": "4.8"
+}
\ No newline at end of file
diff --git a/node_modules/argparse/CHANGELOG.md b/node_modules/argparse/CHANGELOG.md
new file mode 100644
index 0000000..dc39ed6
--- /dev/null
+++ b/node_modules/argparse/CHANGELOG.md
@@ -0,0 +1,216 @@
+# Changelog
+
+All notable changes to this project will be documented in this file.
+
+The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
+and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+
+
+## [2.0.1] - 2020-08-29
+### Fixed
+- Fix issue with `process.argv` when used with interpreters (`coffee`, `ts-node`, etc.), #150.
+
+
+## [2.0.0] - 2020-08-14
+### Changed
+- Full rewrite. Now port from python 3.9.0 & more precise following.
+ See [doc](./doc) for difference and migration info.
+- node.js 10+ required
+- Removed most of local docs in favour of original ones.
+
+
+## [1.0.10] - 2018-02-15
+### Fixed
+- Use .concat instead of + for arrays, #122.
+
+
+## [1.0.9] - 2016-09-29
+### Changed
+- Rerelease after 1.0.8 - deps cleanup.
+
+
+## [1.0.8] - 2016-09-29
+### Changed
+- Maintenance (deps bump, fix node 6.5+ tests, coverage report).
+
+
+## [1.0.7] - 2016-03-17
+### Changed
+- Teach `addArgument` to accept string arg names. #97, @tomxtobin.
+
+
+## [1.0.6] - 2016-02-06
+### Changed
+- Maintenance: moved to eslint & updated CS.
+
+
+## [1.0.5] - 2016-02-05
+### Changed
+- Removed lodash dependency to significantly reduce install size.
+ Thanks to @mourner.
+
+
+## [1.0.4] - 2016-01-17
+### Changed
+- Maintenance: lodash update to 4.0.0.
+
+
+## [1.0.3] - 2015-10-27
+### Fixed
+- Fix parse `=` in args: `--examplepath="C:\myfolder\env=x64"`. #84, @CatWithApple.
+
+
+## [1.0.2] - 2015-03-22
+### Changed
+- Relaxed lodash version dependency.
+
+
+## [1.0.1] - 2015-02-20
+### Changed
+- Changed dependencies to be compatible with ancient nodejs.
+
+
+## [1.0.0] - 2015-02-19
+### Changed
+- Maintenance release.
+- Replaced `underscore` with `lodash`.
+- Bumped version to 1.0.0 to better reflect semver meaning.
+- HISTORY.md -> CHANGELOG.md
+
+
+## [0.1.16] - 2013-12-01
+### Changed
+- Maintenance release. Updated dependencies and docs.
+
+
+## [0.1.15] - 2013-05-13
+### Fixed
+- Fixed #55, @trebor89
+
+
+## [0.1.14] - 2013-05-12
+### Fixed
+- Fixed #62, @maxtaco
+
+
+## [0.1.13] - 2013-04-08
+### Changed
+- Added `.npmignore` to reduce package size
+
+
+## [0.1.12] - 2013-02-10
+### Fixed
+- Fixed conflictHandler (#46), @hpaulj
+
+
+## [0.1.11] - 2013-02-07
+### Added
+- Added 70+ tests (ported from python), @hpaulj
+- Added conflictHandler, @applepicke
+- Added fromfilePrefixChar, @hpaulj
+
+### Fixed
+- Multiple bugfixes, @hpaulj
+
+
+## [0.1.10] - 2012-12-30
+### Added
+- Added [mutual exclusion](http://docs.python.org/dev/library/argparse.html#mutual-exclusion)
+ support, thanks to @hpaulj
+
+### Fixed
+- Fixed options check for `storeConst` & `appendConst` actions, thanks to @hpaulj
+
+
+## [0.1.9] - 2012-12-27
+### Fixed
+- Fixed option dest interfering with other options (issue #23), thanks to @hpaulj
+- Fixed default value behavior with `*` positionals, thanks to @hpaulj
+- Improve `getDefault()` behavior, thanks to @hpaulj
+- Improve negative argument parsing, thanks to @hpaulj
+
+
+## [0.1.8] - 2012-12-01
+### Fixed
+- Fixed parser parents (issue #19), thanks to @hpaulj
+- Fixed negative argument parse (issue #20), thanks to @hpaulj
+
+
+## [0.1.7] - 2012-10-14
+### Fixed
+- Fixed 'choices' argument parse (issue #16)
+- Fixed stderr output (issue #15)
+
+
+## [0.1.6] - 2012-09-09
+### Fixed
+- Fixed check for conflict of options (thanks to @tomxtobin)
+
+
+## [0.1.5] - 2012-09-03
+### Fixed
+- Fix parser #setDefaults method (thanks to @tomxtobin)
+
+
+## [0.1.4] - 2012-07-30
+### Fixed
+- Fixed pseudo-argument support (thanks to @CGamesPlay)
+- Fixed addHelp default (should be true), if not set (thanks to @benblank)
+
+
+## [0.1.3] - 2012-06-27
+### Fixed
+- Fixed formatter api name: Formatter -> HelpFormatter
+
+
+## [0.1.2] - 2012-05-29
+### Fixed
+- Removed excess whitespace in help
+- Fixed error reporting when a parser with subcommands
+  is called with empty arguments
+
+### Added
+- Added basic tests
+
+
+## [0.1.1] - 2012-05-23
+### Fixed
+- Fixed line wrapping in help formatter
+- Added better error reporting on invalid arguments
+
+
+## [0.1.0] - 2012-05-16
+### Added
+- First release.
+
+
+[2.0.1]: https://github.com/nodeca/argparse/compare/2.0.0...2.0.1
+[2.0.0]: https://github.com/nodeca/argparse/compare/1.0.10...2.0.0
+[1.0.10]: https://github.com/nodeca/argparse/compare/1.0.9...1.0.10
+[1.0.9]: https://github.com/nodeca/argparse/compare/1.0.8...1.0.9
+[1.0.8]: https://github.com/nodeca/argparse/compare/1.0.7...1.0.8
+[1.0.7]: https://github.com/nodeca/argparse/compare/1.0.6...1.0.7
+[1.0.6]: https://github.com/nodeca/argparse/compare/1.0.5...1.0.6
+[1.0.5]: https://github.com/nodeca/argparse/compare/1.0.4...1.0.5
+[1.0.4]: https://github.com/nodeca/argparse/compare/1.0.3...1.0.4
+[1.0.3]: https://github.com/nodeca/argparse/compare/1.0.2...1.0.3
+[1.0.2]: https://github.com/nodeca/argparse/compare/1.0.1...1.0.2
+[1.0.1]: https://github.com/nodeca/argparse/compare/1.0.0...1.0.1
+[1.0.0]: https://github.com/nodeca/argparse/compare/0.1.16...1.0.0
+[0.1.16]: https://github.com/nodeca/argparse/compare/0.1.15...0.1.16
+[0.1.15]: https://github.com/nodeca/argparse/compare/0.1.14...0.1.15
+[0.1.14]: https://github.com/nodeca/argparse/compare/0.1.13...0.1.14
+[0.1.13]: https://github.com/nodeca/argparse/compare/0.1.12...0.1.13
+[0.1.12]: https://github.com/nodeca/argparse/compare/0.1.11...0.1.12
+[0.1.11]: https://github.com/nodeca/argparse/compare/0.1.10...0.1.11
+[0.1.10]: https://github.com/nodeca/argparse/compare/0.1.9...0.1.10
+[0.1.9]: https://github.com/nodeca/argparse/compare/0.1.8...0.1.9
+[0.1.8]: https://github.com/nodeca/argparse/compare/0.1.7...0.1.8
+[0.1.7]: https://github.com/nodeca/argparse/compare/0.1.6...0.1.7
+[0.1.6]: https://github.com/nodeca/argparse/compare/0.1.5...0.1.6
+[0.1.5]: https://github.com/nodeca/argparse/compare/0.1.4...0.1.5
+[0.1.4]: https://github.com/nodeca/argparse/compare/0.1.3...0.1.4
+[0.1.3]: https://github.com/nodeca/argparse/compare/0.1.2...0.1.3
+[0.1.2]: https://github.com/nodeca/argparse/compare/0.1.1...0.1.2
+[0.1.1]: https://github.com/nodeca/argparse/compare/0.1.0...0.1.1
+[0.1.0]: https://github.com/nodeca/argparse/releases/tag/0.1.0
diff --git a/node_modules/argparse/LICENSE b/node_modules/argparse/LICENSE
new file mode 100644
index 0000000..66a3ac8
--- /dev/null
+++ b/node_modules/argparse/LICENSE
@@ -0,0 +1,254 @@
+A. HISTORY OF THE SOFTWARE
+==========================
+
+Python was created in the early 1990s by Guido van Rossum at Stichting
+Mathematisch Centrum (CWI, see http://www.cwi.nl) in the Netherlands
+as a successor of a language called ABC. Guido remains Python's
+principal author, although it includes many contributions from others.
+
+In 1995, Guido continued his work on Python at the Corporation for
+National Research Initiatives (CNRI, see http://www.cnri.reston.va.us)
+in Reston, Virginia where he released several versions of the
+software.
+
+In May 2000, Guido and the Python core development team moved to
+BeOpen.com to form the BeOpen PythonLabs team. In October of the same
+year, the PythonLabs team moved to Digital Creations, which became
+Zope Corporation. In 2001, the Python Software Foundation (PSF, see
+https://www.python.org/psf/) was formed, a non-profit organization
+created specifically to own Python-related Intellectual Property.
+Zope Corporation was a sponsoring member of the PSF.
+
+All Python releases are Open Source (see http://www.opensource.org for
+the Open Source Definition). Historically, most, but not all, Python
+releases have also been GPL-compatible; the table below summarizes
+the various releases.
+
+ Release Derived Year Owner GPL-
+ from compatible? (1)
+
+ 0.9.0 thru 1.2 1991-1995 CWI yes
+ 1.3 thru 1.5.2 1.2 1995-1999 CNRI yes
+ 1.6 1.5.2 2000 CNRI no
+ 2.0 1.6 2000 BeOpen.com no
+ 1.6.1 1.6 2001 CNRI yes (2)
+ 2.1 2.0+1.6.1 2001 PSF no
+ 2.0.1 2.0+1.6.1 2001 PSF yes
+ 2.1.1 2.1+2.0.1 2001 PSF yes
+ 2.1.2 2.1.1 2002 PSF yes
+ 2.1.3 2.1.2 2002 PSF yes
+ 2.2 and above 2.1.1 2001-now PSF yes
+
+Footnotes:
+
+(1) GPL-compatible doesn't mean that we're distributing Python under
+ the GPL. All Python licenses, unlike the GPL, let you distribute
+ a modified version without making your changes open source. The
+ GPL-compatible licenses make it possible to combine Python with
+ other software that is released under the GPL; the others don't.
+
+(2) According to Richard Stallman, 1.6.1 is not GPL-compatible,
+ because its license has a choice of law clause. According to
+ CNRI, however, Stallman's lawyer has told CNRI's lawyer that 1.6.1
+ is "not incompatible" with the GPL.
+
+Thanks to the many outside volunteers who have worked under Guido's
+direction to make these releases possible.
+
+
+B. TERMS AND CONDITIONS FOR ACCESSING OR OTHERWISE USING PYTHON
+===============================================================
+
+PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2
+--------------------------------------------
+
+1. This LICENSE AGREEMENT is between the Python Software Foundation
+("PSF"), and the Individual or Organization ("Licensee") accessing and
+otherwise using this software ("Python") in source or binary form and
+its associated documentation.
+
+2. Subject to the terms and conditions of this License Agreement, PSF hereby
+grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce,
+analyze, test, perform and/or display publicly, prepare derivative works,
+distribute, and otherwise use Python alone or in any derivative version,
+provided, however, that PSF's License Agreement and PSF's notice of copyright,
+i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
+2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020 Python Software Foundation;
+All Rights Reserved" are retained in Python alone or in any derivative version
+prepared by Licensee.
+
+3. In the event Licensee prepares a derivative work that is based on
+or incorporates Python or any part thereof, and wants to make
+the derivative work available to others as provided herein, then
+Licensee hereby agrees to include in any such work a brief summary of
+the changes made to Python.
+
+4. PSF is making Python available to Licensee on an "AS IS"
+basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
+IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND
+DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
+FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT
+INFRINGE ANY THIRD PARTY RIGHTS.
+
+5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
+FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
+A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON,
+OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
+
+6. This License Agreement will automatically terminate upon a material
+breach of its terms and conditions.
+
+7. Nothing in this License Agreement shall be deemed to create any
+relationship of agency, partnership, or joint venture between PSF and
+Licensee. This License Agreement does not grant permission to use PSF
+trademarks or trade name in a trademark sense to endorse or promote
+products or services of Licensee, or any third party.
+
+8. By copying, installing or otherwise using Python, Licensee
+agrees to be bound by the terms and conditions of this License
+Agreement.
+
+
+BEOPEN.COM LICENSE AGREEMENT FOR PYTHON 2.0
+-------------------------------------------
+
+BEOPEN PYTHON OPEN SOURCE LICENSE AGREEMENT VERSION 1
+
+1. This LICENSE AGREEMENT is between BeOpen.com ("BeOpen"), having an
+office at 160 Saratoga Avenue, Santa Clara, CA 95051, and the
+Individual or Organization ("Licensee") accessing and otherwise using
+this software in source or binary form and its associated
+documentation ("the Software").
+
+2. Subject to the terms and conditions of this BeOpen Python License
+Agreement, BeOpen hereby grants Licensee a non-exclusive,
+royalty-free, world-wide license to reproduce, analyze, test, perform
+and/or display publicly, prepare derivative works, distribute, and
+otherwise use the Software alone or in any derivative version,
+provided, however, that the BeOpen Python License is retained in the
+Software, alone or in any derivative version prepared by Licensee.
+
+3. BeOpen is making the Software available to Licensee on an "AS IS"
+basis. BEOPEN MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
+IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, BEOPEN MAKES NO AND
+DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
+FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF THE SOFTWARE WILL NOT
+INFRINGE ANY THIRD PARTY RIGHTS.
+
+4. BEOPEN SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE
+SOFTWARE FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS
+AS A RESULT OF USING, MODIFYING OR DISTRIBUTING THE SOFTWARE, OR ANY
+DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
+
+5. This License Agreement will automatically terminate upon a material
+breach of its terms and conditions.
+
+6. This License Agreement shall be governed by and interpreted in all
+respects by the law of the State of California, excluding conflict of
+law provisions. Nothing in this License Agreement shall be deemed to
+create any relationship of agency, partnership, or joint venture
+between BeOpen and Licensee. This License Agreement does not grant
+permission to use BeOpen trademarks or trade names in a trademark
+sense to endorse or promote products or services of Licensee, or any
+third party. As an exception, the "BeOpen Python" logos available at
+http://www.pythonlabs.com/logos.html may be used according to the
+permissions granted on that web page.
+
+7. By copying, installing or otherwise using the software, Licensee
+agrees to be bound by the terms and conditions of this License
+Agreement.
+
+
+CNRI LICENSE AGREEMENT FOR PYTHON 1.6.1
+---------------------------------------
+
+1. This LICENSE AGREEMENT is between the Corporation for National
+Research Initiatives, having an office at 1895 Preston White Drive,
+Reston, VA 20191 ("CNRI"), and the Individual or Organization
+("Licensee") accessing and otherwise using Python 1.6.1 software in
+source or binary form and its associated documentation.
+
+2. Subject to the terms and conditions of this License Agreement, CNRI
+hereby grants Licensee a nonexclusive, royalty-free, world-wide
+license to reproduce, analyze, test, perform and/or display publicly,
+prepare derivative works, distribute, and otherwise use Python 1.6.1
+alone or in any derivative version, provided, however, that CNRI's
+License Agreement and CNRI's notice of copyright, i.e., "Copyright (c)
+1995-2001 Corporation for National Research Initiatives; All Rights
+Reserved" are retained in Python 1.6.1 alone or in any derivative
+version prepared by Licensee. Alternately, in lieu of CNRI's License
+Agreement, Licensee may substitute the following text (omitting the
+quotes): "Python 1.6.1 is made available subject to the terms and
+conditions in CNRI's License Agreement. This Agreement together with
+Python 1.6.1 may be located on the Internet using the following
+unique, persistent identifier (known as a handle): 1895.22/1013. This
+Agreement may also be obtained from a proxy server on the Internet
+using the following URL: http://hdl.handle.net/1895.22/1013".
+
+3. In the event Licensee prepares a derivative work that is based on
+or incorporates Python 1.6.1 or any part thereof, and wants to make
+the derivative work available to others as provided herein, then
+Licensee hereby agrees to include in any such work a brief summary of
+the changes made to Python 1.6.1.
+
+4. CNRI is making Python 1.6.1 available to Licensee on an "AS IS"
+basis. CNRI MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
+IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, CNRI MAKES NO AND
+DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
+FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 1.6.1 WILL NOT
+INFRINGE ANY THIRD PARTY RIGHTS.
+
+5. CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
+1.6.1 FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
+A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON 1.6.1,
+OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
+
+6. This License Agreement will automatically terminate upon a material
+breach of its terms and conditions.
+
+7. This License Agreement shall be governed by the federal
+intellectual property law of the United States, including without
+limitation the federal copyright law, and, to the extent such
+U.S. federal law does not apply, by the law of the Commonwealth of
+Virginia, excluding Virginia's conflict of law provisions.
+Notwithstanding the foregoing, with regard to derivative works based
+on Python 1.6.1 that incorporate non-separable material that was
+previously distributed under the GNU General Public License (GPL), the
+law of the Commonwealth of Virginia shall govern this License
+Agreement only as to issues arising under or with respect to
+Paragraphs 4, 5, and 7 of this License Agreement. Nothing in this
+License Agreement shall be deemed to create any relationship of
+agency, partnership, or joint venture between CNRI and Licensee. This
+License Agreement does not grant permission to use CNRI trademarks or
+trade name in a trademark sense to endorse or promote products or
+services of Licensee, or any third party.
+
+8. By clicking on the "ACCEPT" button where indicated, or by copying,
+installing or otherwise using Python 1.6.1, Licensee agrees to be
+bound by the terms and conditions of this License Agreement.
+
+ ACCEPT
+
+
+CWI LICENSE AGREEMENT FOR PYTHON 0.9.0 THROUGH 1.2
+--------------------------------------------------
+
+Copyright (c) 1991 - 1995, Stichting Mathematisch Centrum Amsterdam,
+The Netherlands. All rights reserved.
+
+Permission to use, copy, modify, and distribute this software and its
+documentation for any purpose and without fee is hereby granted,
+provided that the above copyright notice appear in all copies and that
+both that copyright notice and this permission notice appear in
+supporting documentation, and that the name of Stichting Mathematisch
+Centrum or CWI not be used in advertising or publicity pertaining to
+distribution of the software without specific, written prior
+permission.
+
+STICHTING MATHEMATISCH CENTRUM DISCLAIMS ALL WARRANTIES WITH REGARD TO
+THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
+FITNESS, IN NO EVENT SHALL STICHTING MATHEMATISCH CENTRUM BE LIABLE
+FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
diff --git a/node_modules/argparse/README.md b/node_modules/argparse/README.md
new file mode 100644
index 0000000..550b5c9
--- /dev/null
+++ b/node_modules/argparse/README.md
@@ -0,0 +1,84 @@
+argparse
+========
+
+[Build Status](http://travis-ci.org/nodeca/argparse)
+[NPM version](https://www.npmjs.org/package/argparse)
+
+CLI arguments parser for node.js, with [sub-commands](https://docs.python.org/3.9/library/argparse.html#sub-commands) support. Port of python's [argparse](http://docs.python.org/dev/library/argparse.html) (version [3.9.0](https://github.com/python/cpython/blob/v3.9.0rc1/Lib/argparse.py)).
+
+**Difference with original.**
+
+- JS has no keyword arguments support.
+ - Pass options instead: `new ArgumentParser({ description: 'example', add_help: true })`.
+- JS has no python's types `int`, `float`, ...
+ - Use string-typed names: `.add_argument('-b', { type: 'int', help: 'help' })`.
+- `%r` format specifier uses `require('util').inspect()`.
+
+More details in [doc](./doc).
+
+
+Example
+-------
+
+`test.js` file:
+
+```javascript
+#!/usr/bin/env node
+'use strict';
+
+const { ArgumentParser } = require('argparse');
+const { version } = require('./package.json');
+
+const parser = new ArgumentParser({
+ description: 'Argparse example'
+});
+
+parser.add_argument('-v', '--version', { action: 'version', version });
+parser.add_argument('-f', '--foo', { help: 'foo bar' });
+parser.add_argument('-b', '--bar', { help: 'bar foo' });
+parser.add_argument('--baz', { help: 'baz bar' });
+
+console.dir(parser.parse_args());
+```
+
+Display help:
+
+```
+$ ./test.js -h
+usage: test.js [-h] [-v] [-f FOO] [-b BAR] [--baz BAZ]
+
+Argparse example
+
+optional arguments:
+ -h, --help show this help message and exit
+ -v, --version show program's version number and exit
+ -f FOO, --foo FOO foo bar
+ -b BAR, --bar BAR bar foo
+ --baz BAZ baz bar
+```
+
+Parse arguments:
+
+```
+$ ./test.js -f=3 --bar=4 --baz 5
+{ foo: '3', bar: '4', baz: '5' }
+```
+
+
+API docs
+--------
+
+Since this is a port with minimal divergence, there's no separate documentation.
+Use original one instead, with notes about difference.
+
+1. [Original doc](https://docs.python.org/3.9/library/argparse.html).
+2. [Original tutorial](https://docs.python.org/3.9/howto/argparse.html).
+3. [Difference with python](./doc).
+
+
+argparse for enterprise
+-----------------------
+
+Available as part of the Tidelift Subscription
+
+The maintainers of argparse and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open source dependencies you use to build your applications. Save time, reduce risk, and improve code health, while paying the maintainers of the exact dependencies you use. [Learn more.](https://tidelift.com/subscription/pkg/npm-argparse?utm_source=npm-argparse&utm_medium=referral&utm_campaign=enterprise&utm_term=repo)
diff --git a/node_modules/argparse/argparse.js b/node_modules/argparse/argparse.js
new file mode 100644
index 0000000..2b8c8c6
--- /dev/null
+++ b/node_modules/argparse/argparse.js
@@ -0,0 +1,3707 @@
+// Port of python's argparse module, version 3.9.0:
+// https://github.com/python/cpython/blob/v3.9.0rc1/Lib/argparse.py
+
+'use strict'
+
+// Copyright (C) 2010-2020 Python Software Foundation.
+// Copyright (C) 2020 argparse.js authors
+
+/*
+ * Command-line parsing library
+ *
+ * This module is an optparse-inspired command-line parsing library that:
+ *
+ * - handles both optional and positional arguments
+ * - produces highly informative usage messages
+ * - supports parsers that dispatch to sub-parsers
+ *
+ * The following is a simple usage example that sums integers from the
+ * command-line and writes the result to a file::
+ *
+ * parser = argparse.ArgumentParser(
+ * description='sum the integers at the command line')
+ * parser.add_argument(
+ * 'integers', metavar='int', nargs='+', type=int,
+ * help='an integer to be summed')
+ * parser.add_argument(
+ * '--log', default=sys.stdout, type=argparse.FileType('w'),
+ * help='the file where the sum should be written')
+ * args = parser.parse_args()
+ * args.log.write('%s' % sum(args.integers))
+ * args.log.close()
+ *
+ * The module contains the following public classes:
+ *
+ * - ArgumentParser -- The main entry point for command-line parsing. As the
+ * example above shows, the add_argument() method is used to populate
+ * the parser with actions for optional and positional arguments. Then
+ * the parse_args() method is invoked to convert the args at the
+ * command-line into an object with attributes.
+ *
+ * - ArgumentError -- The exception raised by ArgumentParser objects when
+ * there are errors with the parser's actions. Errors raised while
+ * parsing the command-line are caught by ArgumentParser and emitted
+ * as command-line messages.
+ *
+ * - FileType -- A factory for defining types of files to be created. As the
+ * example above shows, instances of FileType are typically passed as
+ * the type= argument of add_argument() calls.
+ *
+ * - Action -- The base class for parser actions. Typically actions are
+ * selected by passing strings like 'store_true' or 'append_const' to
+ * the action= argument of add_argument(). However, for greater
+ * customization of ArgumentParser actions, subclasses of Action may
+ * be defined and passed as the action= argument.
+ *
+ * - HelpFormatter, RawDescriptionHelpFormatter, RawTextHelpFormatter,
+ * ArgumentDefaultsHelpFormatter -- Formatter classes which
+ * may be passed as the formatter_class= argument to the
+ * ArgumentParser constructor. HelpFormatter is the default,
+ * RawDescriptionHelpFormatter and RawTextHelpFormatter tell the parser
+ * not to change the formatting for help text, and
+ * ArgumentDefaultsHelpFormatter adds information about argument defaults
+ * to the help.
+ *
+ * All other classes in this module are considered implementation details.
+ * (Also note that HelpFormatter and RawDescriptionHelpFormatter are only
+ * considered public as object names -- the API of the formatter objects is
+ * still considered an implementation detail.)
+ */
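+
+// A rough JavaScript sketch of the Python example in the docstring above, for
+// orientation only (it assumes the module is required as `argparse` and uses
+// the ArgumentParser class defined further down in this file):
+//
+//   const { ArgumentParser } = require('argparse')
+//   const parser = new ArgumentParser({
+//     description: 'sum the integers at the command line' })
+//   parser.add_argument('integers', { metavar: 'int', nargs: '+', type: Number,
+//     help: 'an integer to be summed' })
+//   const args = parser.parse_args()
+//   console.log(String(args.integers.reduce((a, b) => a + b, 0)))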
+
+const SUPPRESS = '==SUPPRESS=='
+
+const OPTIONAL = '?'
+const ZERO_OR_MORE = '*'
+const ONE_OR_MORE = '+'
+const PARSER = 'A...'
+const REMAINDER = '...'
+const _UNRECOGNIZED_ARGS_ATTR = '_unrecognized_args'
+
+
+// ==================================
+// Utility functions used for porting
+// ==================================
+const assert = require('assert')
+const util = require('util')
+const fs = require('fs')
+const sub = require('./lib/sub')
+const path = require('path')
+const repr = util.inspect
+
+function get_argv() {
+ // omit first argument (which is assumed to be interpreter - `node`, `coffee`, `ts-node`, etc.)
+ return process.argv.slice(1)
+}
+
+function get_terminal_size() {
+ return {
+ columns: +process.env.COLUMNS || process.stdout.columns || 80
+ }
+}
+
+function hasattr(object, name) {
+ return Object.prototype.hasOwnProperty.call(object, name)
+}
+
+function getattr(object, name, value) {
+ return hasattr(object, name) ? object[name] : value
+}
+
+function setattr(object, name, value) {
+ object[name] = value
+}
+
+function setdefault(object, name, value) {
+ if (!hasattr(object, name)) object[name] = value
+ return object[name]
+}
+
+function delattr(object, name) {
+ delete object[name]
+}
+
+function range(from, to, step=1) {
+ // range(10) is equivalent to range(0, 10)
+ if (arguments.length === 1) [ to, from ] = [ from, 0 ]
+ if (typeof from !== 'number' || typeof to !== 'number' || typeof step !== 'number') {
+ throw new TypeError('argument cannot be interpreted as an integer')
+ }
+ if (step === 0) throw new TypeError('range() arg 3 must not be zero')
+
+ let result = []
+ if (step > 0) {
+ for (let i = from; i < to; i += step) result.push(i)
+ } else {
+ for (let i = from; i > to; i += step) result.push(i)
+ }
+ return result
+}
+
+function splitlines(str, keepends = false) {
+ let result
+ if (!keepends) {
+ result = str.split(/\r\n|[\n\r\v\f\x1c\x1d\x1e\x85\u2028\u2029]/)
+ } else {
+ result = []
+ let parts = str.split(/(\r\n|[\n\r\v\f\x1c\x1d\x1e\x85\u2028\u2029])/)
+ for (let i = 0; i < parts.length; i += 2) {
+ result.push(parts[i] + (i + 1 < parts.length ? parts[i + 1] : ''))
+ }
+ }
+ if (!result[result.length - 1]) result.pop()
+ return result
+}
+
+function _string_lstrip(string, prefix_chars) {
+ let idx = 0
+ while (idx < string.length && prefix_chars.includes(string[idx])) idx++
+ return idx ? string.slice(idx) : string
+}
+
+function _string_split(string, sep, maxsplit) {
+ let result = string.split(sep)
+ if (result.length > maxsplit) {
+ result = result.slice(0, maxsplit).concat([ result.slice(maxsplit).join(sep) ])
+ }
+ return result
+}
+
+function _array_equal(array1, array2) {
+ if (array1.length !== array2.length) return false
+ for (let i = 0; i < array1.length; i++) {
+ if (array1[i] !== array2[i]) return false
+ }
+ return true
+}
+
+function _array_remove(array, item) {
+ let idx = array.indexOf(item)
+ if (idx === -1) throw new TypeError(sub('%r not in list', item))
+ array.splice(idx, 1)
+}
+
+// normalize choices to array;
+// this isn't required in python because `in` and `map` operators work with anything,
+// but in js dealing with multiple types here is too clunky
+function _choices_to_array(choices) {
+ if (choices === undefined) {
+ return []
+ } else if (Array.isArray(choices)) {
+ return choices
+ } else if (choices !== null && typeof choices[Symbol.iterator] === 'function') {
+ return Array.from(choices)
+ } else if (typeof choices === 'object' && choices !== null) {
+ return Object.keys(choices)
+ } else {
+ throw new Error(sub('invalid choices value: %r', choices))
+ }
+}
+
+// decorator that allows a class to be called without new
+function _callable(cls) {
+ let result = { // object is needed for inferred class name
+ [cls.name]: function (...args) {
+ let this_class = new.target === result || !new.target
+ return Reflect.construct(cls, args, this_class ? cls : new.target)
+ }
+ }
+ result[cls.name].prototype = cls.prototype
+ // fix default tag for toString, e.g. [object Action] instead of [object Object]
+ cls.prototype[Symbol.toStringTag] = cls.name
+ return result[cls.name]
+}
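+
+// Illustration: classes wrapped with _callable can be instantiated with or
+// without `new`, e.g. both `new HelpFormatter({ prog: 'x' })` and
+// `HelpFormatter({ prog: 'x' })` return an instance; internal code below
+// relies on the no-`new` form (see this._Section(...) in HelpFormatter).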
+
+function _alias(object, from, to) {
+ try {
+ let name = object.constructor.name
+ Object.defineProperty(object, from, {
+ value: util.deprecate(object[to], sub('%s.%s() is renamed to %s.%s()',
+ name, from, name, to)),
+ enumerable: false
+ })
+ } catch {}
+}
+
+// decorator that allows snake_case class methods to be called with camelCase and vice versa
+function _camelcase_alias(_class) {
+ for (let name of Object.getOwnPropertyNames(_class.prototype)) {
+ let camelcase = name.replace(/\w_[a-z]/g, s => s[0] + s[2].toUpperCase())
+ if (camelcase !== name) _alias(_class.prototype, camelcase, name)
+ }
+ return _class
+}
+
+function _to_legacy_name(key) {
+ key = key.replace(/\w_[a-z]/g, s => s[0] + s[2].toUpperCase())
+ if (key === 'default') key = 'defaultValue'
+ if (key === 'const') key = 'constant'
+ return key
+}
+
+function _to_new_name(key) {
+ if (key === 'defaultValue') key = 'default'
+ if (key === 'constant') key = 'const'
+ key = key.replace(/[A-Z]/g, c => '_' + c.toLowerCase())
+ return key
+}
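+
+// Illustration: these helpers keep v1-era camelCase spellings working, so a
+// legacy call such as
+//   parser.addArgument('--foo', { defaultValue: 42 })
+// still behaves like add_argument('--foo', { default: 42 }), emitting a
+// deprecation warning that points at the new names (`parser` stands for an
+// ArgumentParser instance, defined later in this file).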
+
+// parse options
+let no_default = Symbol('no_default_value')
+function _parse_opts(args, descriptor) {
+ function get_name() {
+ let stack = new Error().stack.split('\n')
+ .map(x => x.match(/^ at (.*) \(.*\)$/))
+ .filter(Boolean)
+ .map(m => m[1])
+ .map(fn => fn.match(/[^ .]*$/)[0])
+
+ if (stack.length && stack[0] === get_name.name) stack.shift()
+ if (stack.length && stack[0] === _parse_opts.name) stack.shift()
+ return stack.length ? stack[0] : ''
+ }
+
+ args = Array.from(args)
+ let kwargs = {}
+ let result = []
+ let last_opt = args.length && args[args.length - 1]
+
+ if (typeof last_opt === 'object' && last_opt !== null && !Array.isArray(last_opt) &&
+ (!last_opt.constructor || last_opt.constructor.name === 'Object')) {
+ kwargs = Object.assign({}, args.pop())
+ }
+
+ // LEGACY (v1 compatibility): camelcase
+ let renames = []
+ for (let key of Object.keys(descriptor)) {
+ let old_name = _to_legacy_name(key)
+ if (old_name !== key && (old_name in kwargs)) {
+ if (key in kwargs) {
+ // default and defaultValue specified at the same time, happens often in old tests
+ //throw new TypeError(sub('%s() got multiple values for argument %r', get_name(), key))
+ } else {
+ kwargs[key] = kwargs[old_name]
+ }
+ renames.push([ old_name, key ])
+ delete kwargs[old_name]
+ }
+ }
+ if (renames.length) {
+ let name = get_name()
+ deprecate('camelcase_' + name, sub('%s(): following options are renamed: %s',
+ name, renames.map(([ a, b ]) => sub('%r -> %r', a, b))))
+ }
+ // end
+
+ let missing_positionals = []
+ let positional_count = args.length
+
+ for (let [ key, def ] of Object.entries(descriptor)) {
+ if (key[0] === '*') {
+ if (key.length > 0 && key[1] === '*') {
+ // LEGACY (v1 compatibility): camelcase
+ let renames = []
+ for (let key of Object.keys(kwargs)) {
+ let new_name = _to_new_name(key)
+ if (new_name !== key && (key in kwargs)) {
+ if (new_name in kwargs) {
+ // default and defaultValue specified at the same time, happens often in old tests
+ //throw new TypeError(sub('%s() got multiple values for argument %r', get_name(), new_name))
+ } else {
+ kwargs[new_name] = kwargs[key]
+ }
+ renames.push([ key, new_name ])
+ delete kwargs[key]
+ }
+ }
+ if (renames.length) {
+ let name = get_name()
+ deprecate('camelcase_' + name, sub('%s(): following options are renamed: %s',
+ name, renames.map(([ a, b ]) => sub('%r -> %r', a, b))))
+ }
+ // end
+ result.push(kwargs)
+ kwargs = {}
+ } else {
+ result.push(args)
+ args = []
+ }
+ } else if (key in kwargs && args.length > 0) {
+ throw new TypeError(sub('%s() got multiple values for argument %r', get_name(), key))
+ } else if (key in kwargs) {
+ result.push(kwargs[key])
+ delete kwargs[key]
+ } else if (args.length > 0) {
+ result.push(args.shift())
+ } else if (def !== no_default) {
+ result.push(def)
+ } else {
+ missing_positionals.push(key)
+ }
+ }
+
+ if (Object.keys(kwargs).length) {
+ throw new TypeError(sub('%s() got an unexpected keyword argument %r',
+ get_name(), Object.keys(kwargs)[0]))
+ }
+
+ if (args.length) {
+ let from = Object.entries(descriptor).filter(([ k, v ]) => k[0] !== '*' && v !== no_default).length
+ let to = Object.entries(descriptor).filter(([ k ]) => k[0] !== '*').length
+ throw new TypeError(sub('%s() takes %s positional argument%s but %s %s given',
+ get_name(),
+      from === to ? to : sub('from %s to %s', from, to),
+ from === to && to === 1 ? '' : 's',
+ positional_count,
+ positional_count === 1 ? 'was' : 'were'))
+ }
+
+ if (missing_positionals.length) {
+ let strs = missing_positionals.map(repr)
+ if (strs.length > 1) strs[strs.length - 1] = 'and ' + strs[strs.length - 1]
+    let str_joined = strs.join(strs.length === 2 ? ' ' : ', ')
+ throw new TypeError(sub('%s() missing %i required positional argument%s: %s',
+ get_name(), strs.length, strs.length === 1 ? '' : 's', str_joined))
+ }
+
+ return result
+}
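+
+// Illustration (not executed): _parse_opts lets every public entry point take
+// either positional values or a trailing plain object of named options. For a
+// descriptor { foo: no_default, bar: 10 }, both f(1, 2) and f(1, { bar: 2 })
+// resolve to [ 1, 2 ], while f(1) falls back to [ 1, 10 ].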
+
+let _deprecations = {}
+function deprecate(id, string) {
+ _deprecations[id] = _deprecations[id] || util.deprecate(() => {}, string)
+ _deprecations[id]()
+}
+
+
+// =============================
+// Utility functions and classes
+// =============================
+function _AttributeHolder(cls = Object) {
+ /*
+ * Abstract base class that provides __repr__.
+ *
+ * The __repr__ method returns a string in the format::
+ * ClassName(attr=name, attr=name, ...)
+ * The attributes are determined either by a class-level attribute,
+ * '_kwarg_names', or by inspecting the instance __dict__.
+ */
+
+ return class _AttributeHolder extends cls {
+ [util.inspect.custom]() {
+ let type_name = this.constructor.name
+ let arg_strings = []
+ let star_args = {}
+ for (let arg of this._get_args()) {
+ arg_strings.push(repr(arg))
+ }
+ for (let [ name, value ] of this._get_kwargs()) {
+ if (/^[a-z_][a-z0-9_$]*$/i.test(name)) {
+ arg_strings.push(sub('%s=%r', name, value))
+ } else {
+ star_args[name] = value
+ }
+ }
+ if (Object.keys(star_args).length) {
+ arg_strings.push(sub('**%s', repr(star_args)))
+ }
+ return sub('%s(%s)', type_name, arg_strings.join(', '))
+ }
+
+ toString() {
+ return this[util.inspect.custom]()
+ }
+
+ _get_kwargs() {
+ return Object.entries(this)
+ }
+
+ _get_args() {
+ return []
+ }
+ }
+}
+
+
+function _copy_items(items) {
+ if (items === undefined) {
+ return []
+ }
+ return items.slice(0)
+}
+
+
+// ===============
+// Formatting Help
+// ===============
+const HelpFormatter = _camelcase_alias(_callable(class HelpFormatter {
+ /*
+ * Formatter for generating usage messages and argument help strings.
+ *
+ * Only the name of this class is considered a public API. All the methods
+ * provided by the class are considered an implementation detail.
+ */
+
+ constructor() {
+ let [
+ prog,
+ indent_increment,
+ max_help_position,
+ width
+ ] = _parse_opts(arguments, {
+ prog: no_default,
+ indent_increment: 2,
+ max_help_position: 24,
+ width: undefined
+ })
+
+ // default setting for width
+ if (width === undefined) {
+ width = get_terminal_size().columns
+ width -= 2
+ }
+
+ this._prog = prog
+ this._indent_increment = indent_increment
+ this._max_help_position = Math.min(max_help_position,
+ Math.max(width - 20, indent_increment * 2))
+ this._width = width
+
+ this._current_indent = 0
+ this._level = 0
+ this._action_max_length = 0
+
+ this._root_section = this._Section(this, undefined)
+ this._current_section = this._root_section
+
+ this._whitespace_matcher = /[ \t\n\r\f\v]+/g // equivalent to python /\s+/ with ASCII flag
+ this._long_break_matcher = /\n\n\n+/g
+ }
+
+ // ===============================
+ // Section and indentation methods
+ // ===============================
+ _indent() {
+ this._current_indent += this._indent_increment
+ this._level += 1
+ }
+
+ _dedent() {
+ this._current_indent -= this._indent_increment
+ assert(this._current_indent >= 0, 'Indent decreased below 0.')
+ this._level -= 1
+ }
+
+ _add_item(func, args) {
+ this._current_section.items.push([ func, args ])
+ }
+
+ // ========================
+ // Message building methods
+ // ========================
+ start_section(heading) {
+ this._indent()
+ let section = this._Section(this, this._current_section, heading)
+ this._add_item(section.format_help.bind(section), [])
+ this._current_section = section
+ }
+
+ end_section() {
+ this._current_section = this._current_section.parent
+ this._dedent()
+ }
+
+ add_text(text) {
+ if (text !== SUPPRESS && text !== undefined) {
+ this._add_item(this._format_text.bind(this), [text])
+ }
+ }
+
+ add_usage(usage, actions, groups, prefix = undefined) {
+ if (usage !== SUPPRESS) {
+ let args = [ usage, actions, groups, prefix ]
+ this._add_item(this._format_usage.bind(this), args)
+ }
+ }
+
+ add_argument(action) {
+ if (action.help !== SUPPRESS) {
+
+ // find all invocations
+ let invocations = [this._format_action_invocation(action)]
+ for (let subaction of this._iter_indented_subactions(action)) {
+ invocations.push(this._format_action_invocation(subaction))
+ }
+
+ // update the maximum item length
+ let invocation_length = Math.max(...invocations.map(invocation => invocation.length))
+ let action_length = invocation_length + this._current_indent
+ this._action_max_length = Math.max(this._action_max_length,
+ action_length)
+
+ // add the item to the list
+ this._add_item(this._format_action.bind(this), [action])
+ }
+ }
+
+ add_arguments(actions) {
+ for (let action of actions) {
+ this.add_argument(action)
+ }
+ }
+
+ // =======================
+ // Help-formatting methods
+ // =======================
+ format_help() {
+ let help = this._root_section.format_help()
+ if (help) {
+ help = help.replace(this._long_break_matcher, '\n\n')
+ help = help.replace(/^\n+|\n+$/g, '') + '\n'
+ }
+ return help
+ }
+
+ _join_parts(part_strings) {
+ return part_strings.filter(part => part && part !== SUPPRESS).join('')
+ }
+
+ _format_usage(usage, actions, groups, prefix) {
+ if (prefix === undefined) {
+ prefix = 'usage: '
+ }
+
+ // if usage is specified, use that
+ if (usage !== undefined) {
+ usage = sub(usage, { prog: this._prog })
+
+ // if no optionals or positionals are available, usage is just prog
+ } else if (usage === undefined && !actions.length) {
+ usage = sub('%(prog)s', { prog: this._prog })
+
+ // if optionals and positionals are available, calculate usage
+ } else if (usage === undefined) {
+ let prog = sub('%(prog)s', { prog: this._prog })
+
+ // split optionals from positionals
+ let optionals = []
+ let positionals = []
+ for (let action of actions) {
+ if (action.option_strings.length) {
+ optionals.push(action)
+ } else {
+ positionals.push(action)
+ }
+ }
+
+ // build full usage string
+ let action_usage = this._format_actions_usage([].concat(optionals).concat(positionals), groups)
+ usage = [ prog, action_usage ].map(String).join(' ')
+
+ // wrap the usage parts if it's too long
+ let text_width = this._width - this._current_indent
+ if (prefix.length + usage.length > text_width) {
+
+ // break usage into wrappable parts
+ let part_regexp = /\(.*?\)+(?=\s|$)|\[.*?\]+(?=\s|$)|\S+/g
+ let opt_usage = this._format_actions_usage(optionals, groups)
+ let pos_usage = this._format_actions_usage(positionals, groups)
+ let opt_parts = opt_usage.match(part_regexp) || []
+ let pos_parts = pos_usage.match(part_regexp) || []
+ assert(opt_parts.join(' ') === opt_usage)
+ assert(pos_parts.join(' ') === pos_usage)
+
+ // helper for wrapping lines
+ let get_lines = (parts, indent, prefix = undefined) => {
+ let lines = []
+ let line = []
+ let line_len
+ if (prefix !== undefined) {
+ line_len = prefix.length - 1
+ } else {
+ line_len = indent.length - 1
+ }
+ for (let part of parts) {
+          if (line_len + 1 + part.length > text_width && line.length) {
+ lines.push(indent + line.join(' '))
+ line = []
+ line_len = indent.length - 1
+ }
+ line.push(part)
+ line_len += part.length + 1
+ }
+ if (line.length) {
+ lines.push(indent + line.join(' '))
+ }
+ if (prefix !== undefined) {
+ lines[0] = lines[0].slice(indent.length)
+ }
+ return lines
+ }
+
+ let lines
+
+ // if prog is short, follow it with optionals or positionals
+ if (prefix.length + prog.length <= 0.75 * text_width) {
+ let indent = ' '.repeat(prefix.length + prog.length + 1)
+ if (opt_parts.length) {
+ lines = get_lines([prog].concat(opt_parts), indent, prefix)
+ lines = lines.concat(get_lines(pos_parts, indent))
+ } else if (pos_parts.length) {
+ lines = get_lines([prog].concat(pos_parts), indent, prefix)
+ } else {
+ lines = [prog]
+ }
+
+ // if prog is long, put it on its own line
+ } else {
+ let indent = ' '.repeat(prefix.length)
+ let parts = [].concat(opt_parts).concat(pos_parts)
+ lines = get_lines(parts, indent)
+ if (lines.length > 1) {
+ lines = []
+ lines = lines.concat(get_lines(opt_parts, indent))
+ lines = lines.concat(get_lines(pos_parts, indent))
+ }
+ lines = [prog].concat(lines)
+ }
+
+ // join lines into usage
+ usage = lines.join('\n')
+ }
+ }
+
+ // prefix with 'usage:'
+ return sub('%s%s\n\n', prefix, usage)
+ }
+
+ _format_actions_usage(actions, groups) {
+ // find group indices and identify actions in groups
+ let group_actions = new Set()
+ let inserts = {}
+ for (let group of groups) {
+ let start = actions.indexOf(group._group_actions[0])
+ if (start === -1) {
+ continue
+ } else {
+ let end = start + group._group_actions.length
+ if (_array_equal(actions.slice(start, end), group._group_actions)) {
+ for (let action of group._group_actions) {
+ group_actions.add(action)
+ }
+ if (!group.required) {
+ if (start in inserts) {
+ inserts[start] += ' ['
+ } else {
+ inserts[start] = '['
+ }
+ if (end in inserts) {
+ inserts[end] += ']'
+ } else {
+ inserts[end] = ']'
+ }
+ } else {
+ if (start in inserts) {
+ inserts[start] += ' ('
+ } else {
+ inserts[start] = '('
+ }
+ if (end in inserts) {
+ inserts[end] += ')'
+ } else {
+ inserts[end] = ')'
+ }
+ }
+ for (let i of range(start + 1, end)) {
+ inserts[i] = '|'
+ }
+ }
+ }
+ }
+
+ // collect all actions format strings
+ let parts = []
+ for (let [ i, action ] of Object.entries(actions)) {
+
+ // suppressed arguments are marked with None
+ // remove | separators for suppressed arguments
+ if (action.help === SUPPRESS) {
+ parts.push(undefined)
+ if (inserts[+i] === '|') {
+ delete inserts[+i]
+ } else if (inserts[+i + 1] === '|') {
+ delete inserts[+i + 1]
+ }
+
+ // produce all arg strings
+ } else if (!action.option_strings.length) {
+ let default_value = this._get_default_metavar_for_positional(action)
+ let part = this._format_args(action, default_value)
+
+ // if it's in a group, strip the outer []
+ if (group_actions.has(action)) {
+ if (part[0] === '[' && part[part.length - 1] === ']') {
+ part = part.slice(1, -1)
+ }
+ }
+
+ // add the action string to the list
+ parts.push(part)
+
+ // produce the first way to invoke the option in brackets
+ } else {
+ let option_string = action.option_strings[0]
+ let part
+
+ // if the Optional doesn't take a value, format is:
+ // -s or --long
+ if (action.nargs === 0) {
+ part = action.format_usage()
+
+ // if the Optional takes a value, format is:
+ // -s ARGS or --long ARGS
+ } else {
+ let default_value = this._get_default_metavar_for_optional(action)
+ let args_string = this._format_args(action, default_value)
+ part = sub('%s %s', option_string, args_string)
+ }
+
+ // make it look optional if it's not required or in a group
+ if (!action.required && !group_actions.has(action)) {
+ part = sub('[%s]', part)
+ }
+
+ // add the action string to the list
+ parts.push(part)
+ }
+ }
+
+ // insert things at the necessary indices
+ for (let i of Object.keys(inserts).map(Number).sort((a, b) => b - a)) {
+ parts.splice(+i, 0, inserts[+i])
+ }
+
+ // join all the action items with spaces
+ let text = parts.filter(Boolean).join(' ')
+
+ // clean up separators for mutually exclusive groups
+ text = text.replace(/([\[(]) /g, '$1')
+ text = text.replace(/ ([\])])/g, '$1')
+ text = text.replace(/[\[(] *[\])]/g, '')
+    text = text.replace(/\(([^|]*)\)/g, '$1')
+ text = text.trim()
+
+ // return the text
+ return text
+ }
+
+ _format_text(text) {
+ if (text.includes('%(prog)')) {
+ text = sub(text, { prog: this._prog })
+ }
+ let text_width = Math.max(this._width - this._current_indent, 11)
+ let indent = ' '.repeat(this._current_indent)
+ return this._fill_text(text, text_width, indent) + '\n\n'
+ }
+
+ _format_action(action) {
+ // determine the required width and the entry label
+ let help_position = Math.min(this._action_max_length + 2,
+ this._max_help_position)
+ let help_width = Math.max(this._width - help_position, 11)
+ let action_width = help_position - this._current_indent - 2
+ let action_header = this._format_action_invocation(action)
+ let indent_first
+
+ // no help; start on same line and add a final newline
+ if (!action.help) {
+ let tup = [ this._current_indent, '', action_header ]
+ action_header = sub('%*s%s\n', ...tup)
+
+ // short action name; start on the same line and pad two spaces
+ } else if (action_header.length <= action_width) {
+ let tup = [ this._current_indent, '', action_width, action_header ]
+ action_header = sub('%*s%-*s ', ...tup)
+ indent_first = 0
+
+ // long action name; start on the next line
+ } else {
+ let tup = [ this._current_indent, '', action_header ]
+ action_header = sub('%*s%s\n', ...tup)
+ indent_first = help_position
+ }
+
+ // collect the pieces of the action help
+ let parts = [action_header]
+
+ // if there was help for the action, add lines of help text
+ if (action.help) {
+ let help_text = this._expand_help(action)
+ let help_lines = this._split_lines(help_text, help_width)
+ parts.push(sub('%*s%s\n', indent_first, '', help_lines[0]))
+ for (let line of help_lines.slice(1)) {
+ parts.push(sub('%*s%s\n', help_position, '', line))
+ }
+
+ // or add a newline if the description doesn't end with one
+ } else if (!action_header.endsWith('\n')) {
+ parts.push('\n')
+ }
+
+ // if there are any sub-actions, add their help as well
+ for (let subaction of this._iter_indented_subactions(action)) {
+ parts.push(this._format_action(subaction))
+ }
+
+ // return a single string
+ return this._join_parts(parts)
+ }
+
+ _format_action_invocation(action) {
+ if (!action.option_strings.length) {
+ let default_value = this._get_default_metavar_for_positional(action)
+ let metavar = this._metavar_formatter(action, default_value)(1)[0]
+ return metavar
+
+ } else {
+ let parts = []
+
+ // if the Optional doesn't take a value, format is:
+ // -s, --long
+ if (action.nargs === 0) {
+ parts = parts.concat(action.option_strings)
+
+ // if the Optional takes a value, format is:
+ // -s ARGS, --long ARGS
+ } else {
+ let default_value = this._get_default_metavar_for_optional(action)
+ let args_string = this._format_args(action, default_value)
+ for (let option_string of action.option_strings) {
+ parts.push(sub('%s %s', option_string, args_string))
+ }
+ }
+
+ return parts.join(', ')
+ }
+ }
+
+ _metavar_formatter(action, default_metavar) {
+ let result
+ if (action.metavar !== undefined) {
+ result = action.metavar
+ } else if (action.choices !== undefined) {
+ let choice_strs = _choices_to_array(action.choices).map(String)
+ result = sub('{%s}', choice_strs.join(','))
+ } else {
+ result = default_metavar
+ }
+
+ function format(tuple_size) {
+ if (Array.isArray(result)) {
+ return result
+ } else {
+ return Array(tuple_size).fill(result)
+ }
+ }
+ return format
+ }
+
+ _format_args(action, default_metavar) {
+ let get_metavar = this._metavar_formatter(action, default_metavar)
+ let result
+ if (action.nargs === undefined) {
+ result = sub('%s', ...get_metavar(1))
+ } else if (action.nargs === OPTIONAL) {
+ result = sub('[%s]', ...get_metavar(1))
+ } else if (action.nargs === ZERO_OR_MORE) {
+ let metavar = get_metavar(1)
+ if (metavar.length === 2) {
+ result = sub('[%s [%s ...]]', ...metavar)
+ } else {
+ result = sub('[%s ...]', ...metavar)
+ }
+ } else if (action.nargs === ONE_OR_MORE) {
+ result = sub('%s [%s ...]', ...get_metavar(2))
+ } else if (action.nargs === REMAINDER) {
+ result = '...'
+ } else if (action.nargs === PARSER) {
+ result = sub('%s ...', ...get_metavar(1))
+ } else if (action.nargs === SUPPRESS) {
+ result = ''
+ } else {
+ let formats
+ try {
+ formats = range(action.nargs).map(() => '%s')
+ } catch (err) {
+ throw new TypeError('invalid nargs value')
+ }
+ result = sub(formats.join(' '), ...get_metavar(action.nargs))
+ }
+ return result
+ }
+
+ _expand_help(action) {
+ let params = Object.assign({ prog: this._prog }, action)
+ for (let name of Object.keys(params)) {
+ if (params[name] === SUPPRESS) {
+ delete params[name]
+ }
+ }
+ for (let name of Object.keys(params)) {
+ if (params[name] && params[name].name) {
+ params[name] = params[name].name
+ }
+ }
+ if (params.choices !== undefined) {
+ let choices_str = _choices_to_array(params.choices).map(String).join(', ')
+ params.choices = choices_str
+ }
+ // LEGACY (v1 compatibility): camelcase
+ for (let key of Object.keys(params)) {
+ let old_name = _to_legacy_name(key)
+ if (old_name !== key) {
+ params[old_name] = params[key]
+ }
+ }
+ // end
+ return sub(this._get_help_string(action), params)
+ }
+
+ * _iter_indented_subactions(action) {
+ if (typeof action._get_subactions === 'function') {
+ this._indent()
+ yield* action._get_subactions()
+ this._dedent()
+ }
+ }
+
+ _split_lines(text, width) {
+ text = text.replace(this._whitespace_matcher, ' ').trim()
+ // The textwrap module is used only for formatting help.
+ // Delay its import for speeding up the common usage of argparse.
+ let textwrap = require('./lib/textwrap')
+ return textwrap.wrap(text, { width })
+ }
+
+ _fill_text(text, width, indent) {
+ text = text.replace(this._whitespace_matcher, ' ').trim()
+ let textwrap = require('./lib/textwrap')
+ return textwrap.fill(text, { width,
+ initial_indent: indent,
+ subsequent_indent: indent })
+ }
+
+ _get_help_string(action) {
+ return action.help
+ }
+
+ _get_default_metavar_for_optional(action) {
+ return action.dest.toUpperCase()
+ }
+
+ _get_default_metavar_for_positional(action) {
+ return action.dest
+ }
+}))
+
+HelpFormatter.prototype._Section = _callable(class _Section {
+
+ constructor(formatter, parent, heading = undefined) {
+ this.formatter = formatter
+ this.parent = parent
+ this.heading = heading
+ this.items = []
+ }
+
+ format_help() {
+ // format the indented section
+ if (this.parent !== undefined) {
+ this.formatter._indent()
+ }
+ let item_help = this.formatter._join_parts(this.items.map(([ func, args ]) => func.apply(null, args)))
+ if (this.parent !== undefined) {
+ this.formatter._dedent()
+ }
+
+ // return nothing if the section was empty
+ if (!item_help) {
+ return ''
+ }
+
+ // add the heading if the section was non-empty
+ let heading
+ if (this.heading !== SUPPRESS && this.heading !== undefined) {
+ let current_indent = this.formatter._current_indent
+ heading = sub('%*s%s:\n', current_indent, '', this.heading)
+ } else {
+ heading = ''
+ }
+
+ // join the section-initial newline, the heading and the help
+ return this.formatter._join_parts(['\n', heading, item_help, '\n'])
+ }
+})
+
+
+const RawDescriptionHelpFormatter = _camelcase_alias(_callable(class RawDescriptionHelpFormatter extends HelpFormatter {
+ /*
+ * Help message formatter which retains any formatting in descriptions.
+ *
+ * Only the name of this class is considered a public API. All the methods
+ * provided by the class are considered an implementation detail.
+ */
+
+ _fill_text(text, width, indent) {
+ return splitlines(text, true).map(line => indent + line).join('')
+ }
+}))
+
+
+const RawTextHelpFormatter = _camelcase_alias(_callable(class RawTextHelpFormatter extends RawDescriptionHelpFormatter {
+ /*
+ * Help message formatter which retains formatting of all help text.
+ *
+ * Only the name of this class is considered a public API. All the methods
+ * provided by the class are considered an implementation detail.
+ */
+
+ _split_lines(text/*, width*/) {
+ return splitlines(text)
+ }
+}))
+
+
+const ArgumentDefaultsHelpFormatter = _camelcase_alias(_callable(class ArgumentDefaultsHelpFormatter extends HelpFormatter {
+ /*
+ * Help message formatter which adds default values to argument help.
+ *
+ * Only the name of this class is considered a public API. All the methods
+ * provided by the class are considered an implementation detail.
+ */
+
+ _get_help_string(action) {
+ let help = action.help
+ // LEGACY (v1 compatibility): additional check for defaultValue needed
+ if (!action.help.includes('%(default)') && !action.help.includes('%(defaultValue)')) {
+ if (action.default !== SUPPRESS) {
+ let defaulting_nargs = [OPTIONAL, ZERO_OR_MORE]
+ if (action.option_strings.length || defaulting_nargs.includes(action.nargs)) {
+ help += ' (default: %(default)s)'
+ }
+ }
+ }
+ return help
+ }
+}))
+
+
+const MetavarTypeHelpFormatter = _camelcase_alias(_callable(class MetavarTypeHelpFormatter extends HelpFormatter {
+ /*
+ * Help message formatter which uses the argument 'type' as the default
+ * metavar value (instead of the argument 'dest')
+ *
+ * Only the name of this class is considered a public API. All the methods
+ * provided by the class are considered an implementation detail.
+ */
+
+ _get_default_metavar_for_optional(action) {
+ return typeof action.type === 'function' ? action.type.name : action.type
+ }
+
+ _get_default_metavar_for_positional(action) {
+ return typeof action.type === 'function' ? action.type.name : action.type
+ }
+}))
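+
+// Illustrative sketch: the formatter classes above are chosen through the
+// formatter_class option of ArgumentParser (defined later in this file), e.g.
+//   const parser = new ArgumentParser({
+//     formatter_class: ArgumentDefaultsHelpFormatter })
+//   parser.add_argument('--retries', { default: 3, help: 'number of retries' })
+//   parser.print_help()   // the --retries line now ends with "(default: 3)"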
+
+
+// =====================
+// Options and Arguments
+// =====================
+function _get_action_name(argument) {
+ if (argument === undefined) {
+ return undefined
+ } else if (argument.option_strings.length) {
+ return argument.option_strings.join('/')
+ } else if (![ undefined, SUPPRESS ].includes(argument.metavar)) {
+ return argument.metavar
+ } else if (![ undefined, SUPPRESS ].includes(argument.dest)) {
+ return argument.dest
+ } else {
+ return undefined
+ }
+}
+
+
+const ArgumentError = _callable(class ArgumentError extends Error {
+ /*
+ * An error from creating or using an argument (optional or positional).
+ *
+ * The string value of this exception is the message, augmented with
+ * information about the argument that caused it.
+ */
+
+ constructor(argument, message) {
+ super()
+ this.name = 'ArgumentError'
+ this._argument_name = _get_action_name(argument)
+ this._message = message
+ this.message = this.str()
+ }
+
+ str() {
+ let format
+ if (this._argument_name === undefined) {
+ format = '%(message)s'
+ } else {
+ format = 'argument %(argument_name)s: %(message)s'
+ }
+ return sub(format, { message: this._message,
+ argument_name: this._argument_name })
+ }
+})
+
+
+const ArgumentTypeError = _callable(class ArgumentTypeError extends Error {
+ /*
+ * An error from trying to convert a command line string to a type.
+ */
+
+ constructor(message) {
+ super(message)
+ this.name = 'ArgumentTypeError'
+ }
+})
+
+
+// ==============
+// Action classes
+// ==============
+const Action = _camelcase_alias(_callable(class Action extends _AttributeHolder(Function) {
+ /*
+ * Information about how to convert command line strings to Python objects.
+ *
+ * Action objects are used by an ArgumentParser to represent the information
+ * needed to parse a single argument from one or more strings from the
+ * command line. The keyword arguments to the Action constructor are also
+ * all attributes of Action instances.
+ *
+ * Keyword Arguments:
+ *
+ * - option_strings -- A list of command-line option strings which
+ * should be associated with this action.
+ *
+ * - dest -- The name of the attribute to hold the created object(s)
+ *
+ * - nargs -- The number of command-line arguments that should be
+ * consumed. By default, one argument will be consumed and a single
+ * value will be produced. Other values include:
+ * - N (an integer) consumes N arguments (and produces a list)
+ * - '?' consumes zero or one arguments
+ * - '*' consumes zero or more arguments (and produces a list)
+ * - '+' consumes one or more arguments (and produces a list)
+ * Note that the difference between the default and nargs=1 is that
+ * with the default, a single value will be produced, while with
+ * nargs=1, a list containing a single value will be produced.
+ *
+ * - const -- The value to be produced if the option is specified and the
+ * option uses an action that takes no values.
+ *
+ * - default -- The value to be produced if the option is not specified.
+ *
+ * - type -- A callable that accepts a single string argument, and
+ * returns the converted value. The standard Python types str, int,
+ * float, and complex are useful examples of such callables. If None,
+ * str is used.
+ *
+ * - choices -- A container of values that should be allowed. If not None,
+ * after a command-line argument has been converted to the appropriate
+ * type, an exception will be raised if it is not a member of this
+ * collection.
+ *
+ * - required -- True if the action must always be specified at the
+ * command line. This is only meaningful for optional command-line
+ * arguments.
+ *
+ * - help -- The help string describing the argument.
+ *
+ * - metavar -- The name to be used for the option's argument with the
+ * help string. If None, the 'dest' value will be used as the name.
+ */
+
+ constructor() {
+ let [
+ option_strings,
+ dest,
+ nargs,
+ const_value,
+ default_value,
+ type,
+ choices,
+ required,
+ help,
+ metavar
+ ] = _parse_opts(arguments, {
+ option_strings: no_default,
+ dest: no_default,
+ nargs: undefined,
+ const: undefined,
+ default: undefined,
+ type: undefined,
+ choices: undefined,
+ required: false,
+ help: undefined,
+ metavar: undefined
+ })
+
+ // when this class is called as a function, redirect it to .call() method of itself
+ super('return arguments.callee.call.apply(arguments.callee, arguments)')
+
+ this.option_strings = option_strings
+ this.dest = dest
+ this.nargs = nargs
+ this.const = const_value
+ this.default = default_value
+ this.type = type
+ this.choices = choices
+ this.required = required
+ this.help = help
+ this.metavar = metavar
+ }
+
+ _get_kwargs() {
+ let names = [
+ 'option_strings',
+ 'dest',
+ 'nargs',
+ 'const',
+ 'default',
+ 'type',
+ 'choices',
+ 'help',
+ 'metavar'
+ ]
+ return names.map(name => [ name, getattr(this, name) ])
+ }
+
+ format_usage() {
+ return this.option_strings[0]
+ }
+
+ call(/*parser, namespace, values, option_string = undefined*/) {
+ throw new Error('.call() not defined')
+ }
+}))
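+
+// Illustrative sketch of a custom action (the class name is hypothetical):
+// subclasses override call(), which the parser invokes as
+// action(parser, namespace, values, option_string); see _StoreAction below
+// for the canonical example.
+//
+//   class UpperCaseAction extends Action {
+//     call(parser, namespace, values/*, option_string*/) {
+//       setattr(namespace, this.dest, String(values).toUpperCase())
+//     }
+//   }
+//   // later: parser.add_argument('--name', { action: UpperCaseAction })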
+
+
+const BooleanOptionalAction = _camelcase_alias(_callable(class BooleanOptionalAction extends Action {
+
+ constructor() {
+ let [
+ option_strings,
+ dest,
+ default_value,
+ type,
+ choices,
+ required,
+ help,
+ metavar
+ ] = _parse_opts(arguments, {
+ option_strings: no_default,
+ dest: no_default,
+ default: undefined,
+ type: undefined,
+ choices: undefined,
+ required: false,
+ help: undefined,
+ metavar: undefined
+ })
+
+ let _option_strings = []
+ for (let option_string of option_strings) {
+ _option_strings.push(option_string)
+
+ if (option_string.startsWith('--')) {
+ option_string = '--no-' + option_string.slice(2)
+ _option_strings.push(option_string)
+ }
+ }
+
+ if (help !== undefined && default_value !== undefined) {
+ help += ` (default: ${default_value})`
+ }
+
+ super({
+ option_strings: _option_strings,
+ dest,
+ nargs: 0,
+ default: default_value,
+ type,
+ choices,
+ required,
+ help,
+ metavar
+ })
+ }
+
+ call(parser, namespace, values, option_string = undefined) {
+ if (this.option_strings.includes(option_string)) {
+ setattr(namespace, this.dest, !option_string.startsWith('--no-'))
+ }
+ }
+
+ format_usage() {
+ return this.option_strings.join(' | ')
+ }
+}))
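+
+// Illustration (hypothetical flag name): registering an option with this
+// action automatically adds a negated "--no-" form, so
+//   parser.add_argument('--cache', { action: BooleanOptionalAction, default: true })
+// accepts both --cache and --no-cache and stores true or false accordingly.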
+
+
+const _StoreAction = _callable(class _StoreAction extends Action {
+
+ constructor() {
+ let [
+ option_strings,
+ dest,
+ nargs,
+ const_value,
+ default_value,
+ type,
+ choices,
+ required,
+ help,
+ metavar
+ ] = _parse_opts(arguments, {
+ option_strings: no_default,
+ dest: no_default,
+ nargs: undefined,
+ const: undefined,
+ default: undefined,
+ type: undefined,
+ choices: undefined,
+ required: false,
+ help: undefined,
+ metavar: undefined
+ })
+
+ if (nargs === 0) {
+ throw new TypeError('nargs for store actions must be != 0; if you ' +
+ 'have nothing to store, actions such as store ' +
+ 'true or store const may be more appropriate')
+ }
+ if (const_value !== undefined && nargs !== OPTIONAL) {
+ throw new TypeError(sub('nargs must be %r to supply const', OPTIONAL))
+ }
+ super({
+ option_strings,
+ dest,
+ nargs,
+ const: const_value,
+ default: default_value,
+ type,
+ choices,
+ required,
+ help,
+ metavar
+ })
+ }
+
+ call(parser, namespace, values/*, option_string = undefined*/) {
+ setattr(namespace, this.dest, values)
+ }
+})
+
+
+const _StoreConstAction = _callable(class _StoreConstAction extends Action {
+
+ constructor() {
+ let [
+ option_strings,
+ dest,
+ const_value,
+ default_value,
+ required,
+ help
+ //, metavar
+ ] = _parse_opts(arguments, {
+ option_strings: no_default,
+ dest: no_default,
+ const: no_default,
+ default: undefined,
+ required: false,
+ help: undefined,
+ metavar: undefined
+ })
+
+ super({
+ option_strings,
+ dest,
+ nargs: 0,
+ const: const_value,
+ default: default_value,
+ required,
+ help
+ })
+ }
+
+ call(parser, namespace/*, values, option_string = undefined*/) {
+ setattr(namespace, this.dest, this.const)
+ }
+})
+
+
+const _StoreTrueAction = _callable(class _StoreTrueAction extends _StoreConstAction {
+
+ constructor() {
+ let [
+ option_strings,
+ dest,
+ default_value,
+ required,
+ help
+ ] = _parse_opts(arguments, {
+ option_strings: no_default,
+ dest: no_default,
+ default: false,
+ required: false,
+ help: undefined
+ })
+
+ super({
+ option_strings,
+ dest,
+ const: true,
+ default: default_value,
+ required,
+ help
+ })
+ }
+})
+
+
+const _StoreFalseAction = _callable(class _StoreFalseAction extends _StoreConstAction {
+
+ constructor() {
+ let [
+ option_strings,
+ dest,
+ default_value,
+ required,
+ help
+ ] = _parse_opts(arguments, {
+ option_strings: no_default,
+ dest: no_default,
+ default: true,
+ required: false,
+ help: undefined
+ })
+
+ super({
+ option_strings,
+ dest,
+ const: false,
+ default: default_value,
+ required,
+ help
+ })
+ }
+})
+
+
+const _AppendAction = _callable(class _AppendAction extends Action {
+
+ constructor() {
+ let [
+ option_strings,
+ dest,
+ nargs,
+ const_value,
+ default_value,
+ type,
+ choices,
+ required,
+ help,
+ metavar
+ ] = _parse_opts(arguments, {
+ option_strings: no_default,
+ dest: no_default,
+ nargs: undefined,
+ const: undefined,
+ default: undefined,
+ type: undefined,
+ choices: undefined,
+ required: false,
+ help: undefined,
+ metavar: undefined
+ })
+
+ if (nargs === 0) {
+ throw new TypeError('nargs for append actions must be != 0; if arg ' +
+ 'strings are not supplying the value to append, ' +
+ 'the append const action may be more appropriate')
+ }
+ if (const_value !== undefined && nargs !== OPTIONAL) {
+ throw new TypeError(sub('nargs must be %r to supply const', OPTIONAL))
+ }
+ super({
+ option_strings,
+ dest,
+ nargs,
+ const: const_value,
+ default: default_value,
+ type,
+ choices,
+ required,
+ help,
+ metavar
+ })
+ }
+
+ call(parser, namespace, values/*, option_string = undefined*/) {
+ let items = getattr(namespace, this.dest, undefined)
+ items = _copy_items(items)
+ items.push(values)
+ setattr(namespace, this.dest, items)
+ }
+})
+
+
+const _AppendConstAction = _callable(class _AppendConstAction extends Action {
+
+ constructor() {
+ let [
+ option_strings,
+ dest,
+ const_value,
+ default_value,
+ required,
+ help,
+ metavar
+ ] = _parse_opts(arguments, {
+ option_strings: no_default,
+ dest: no_default,
+ const: no_default,
+ default: undefined,
+ required: false,
+ help: undefined,
+ metavar: undefined
+ })
+
+ super({
+ option_strings,
+ dest,
+ nargs: 0,
+ const: const_value,
+ default: default_value,
+ required,
+ help,
+ metavar
+ })
+ }
+
+ call(parser, namespace/*, values, option_string = undefined*/) {
+ let items = getattr(namespace, this.dest, undefined)
+ items = _copy_items(items)
+ items.push(this.const)
+ setattr(namespace, this.dest, items)
+ }
+})
+
+
+const _CountAction = _callable(class _CountAction extends Action {
+
+ constructor() {
+ let [
+ option_strings,
+ dest,
+ default_value,
+ required,
+ help
+ ] = _parse_opts(arguments, {
+ option_strings: no_default,
+ dest: no_default,
+ default: undefined,
+ required: false,
+ help: undefined
+ })
+
+ super({
+ option_strings,
+ dest,
+ nargs: 0,
+ default: default_value,
+ required,
+ help
+ })
+ }
+
+ call(parser, namespace/*, values, option_string = undefined*/) {
+ let count = getattr(namespace, this.dest, undefined)
+ if (count === undefined) {
+ count = 0
+ }
+ setattr(namespace, this.dest, count + 1)
+ }
+})
+
+
+const _HelpAction = _callable(class _HelpAction extends Action {
+
+ constructor() {
+ let [
+ option_strings,
+ dest,
+ default_value,
+ help
+ ] = _parse_opts(arguments, {
+ option_strings: no_default,
+ dest: SUPPRESS,
+ default: SUPPRESS,
+ help: undefined
+ })
+
+ super({
+ option_strings,
+ dest,
+ default: default_value,
+ nargs: 0,
+ help
+ })
+ }
+
+ call(parser/*, namespace, values, option_string = undefined*/) {
+ parser.print_help()
+ parser.exit()
+ }
+})
+
+
+const _VersionAction = _callable(class _VersionAction extends Action {
+
+ constructor() {
+ let [
+ option_strings,
+ version,
+ dest,
+ default_value,
+ help
+ ] = _parse_opts(arguments, {
+ option_strings: no_default,
+ version: undefined,
+ dest: SUPPRESS,
+ default: SUPPRESS,
+ help: "show program's version number and exit"
+ })
+
+ super({
+ option_strings,
+ dest,
+ default: default_value,
+ nargs: 0,
+ help
+ })
+ this.version = version
+ }
+
+ call(parser/*, namespace, values, option_string = undefined*/) {
+ let version = this.version
+ if (version === undefined) {
+ version = parser.version
+ }
+ let formatter = parser._get_formatter()
+ formatter.add_text(version)
+ parser._print_message(formatter.format_help(), process.stdout)
+ parser.exit()
+ }
+})
+
+
+const _SubParsersAction = _camelcase_alias(_callable(class _SubParsersAction extends Action {
+
+ constructor() {
+ let [
+ option_strings,
+ prog,
+ parser_class,
+ dest,
+ required,
+ help,
+ metavar
+ ] = _parse_opts(arguments, {
+ option_strings: no_default,
+ prog: no_default,
+ parser_class: no_default,
+ dest: SUPPRESS,
+ required: false,
+ help: undefined,
+ metavar: undefined
+ })
+
+ let name_parser_map = {}
+
+ super({
+ option_strings,
+ dest,
+ nargs: PARSER,
+ choices: name_parser_map,
+ required,
+ help,
+ metavar
+ })
+
+ this._prog_prefix = prog
+ this._parser_class = parser_class
+ this._name_parser_map = name_parser_map
+ this._choices_actions = []
+ }
+
+ add_parser() {
+ let [
+ name,
+ kwargs
+ ] = _parse_opts(arguments, {
+ name: no_default,
+ '**kwargs': no_default
+ })
+
+ // set prog from the existing prefix
+ if (kwargs.prog === undefined) {
+ kwargs.prog = sub('%s %s', this._prog_prefix, name)
+ }
+
+ let aliases = getattr(kwargs, 'aliases', [])
+ delete kwargs.aliases
+
+ // create a pseudo-action to hold the choice help
+ if ('help' in kwargs) {
+ let help = kwargs.help
+ delete kwargs.help
+ let choice_action = this._ChoicesPseudoAction(name, aliases, help)
+ this._choices_actions.push(choice_action)
+ }
+
+ // create the parser and add it to the map
+ let parser = new this._parser_class(kwargs)
+ this._name_parser_map[name] = parser
+
+ // make parser available under aliases also
+ for (let alias of aliases) {
+ this._name_parser_map[alias] = parser
+ }
+
+ return parser
+ }
+
+ _get_subactions() {
+ return this._choices_actions
+ }
+
+ call(parser, namespace, values/*, option_string = undefined*/) {
+ let parser_name = values[0]
+ let arg_strings = values.slice(1)
+
+ // set the parser name if requested
+ if (this.dest !== SUPPRESS) {
+ setattr(namespace, this.dest, parser_name)
+ }
+
+ // select the parser
+ if (hasattr(this._name_parser_map, parser_name)) {
+ parser = this._name_parser_map[parser_name]
+ } else {
+ let args = {parser_name,
+                  choices: Object.keys(this._name_parser_map).join(', ')}
+ let msg = sub('unknown parser %(parser_name)r (choices: %(choices)s)', args)
+ throw new ArgumentError(this, msg)
+ }
+
+ // parse all the remaining options into the namespace
+ // store any unrecognized options on the object, so that the top
+ // level parser can decide what to do with them
+
+ // In case this subparser defines new defaults, we parse them
+ // in a new namespace object and then update the original
+ // namespace for the relevant parts.
+ let subnamespace
+ [ subnamespace, arg_strings ] = parser.parse_known_args(arg_strings, undefined)
+ for (let [ key, value ] of Object.entries(subnamespace)) {
+ setattr(namespace, key, value)
+ }
+
+ if (arg_strings.length) {
+ setdefault(namespace, _UNRECOGNIZED_ARGS_ATTR, [])
+ getattr(namespace, _UNRECOGNIZED_ARGS_ATTR).push(...arg_strings)
+ }
+ }
+}))
+
+
+_SubParsersAction.prototype._ChoicesPseudoAction = _callable(class _ChoicesPseudoAction extends Action {
+ constructor(name, aliases, help) {
+ let metavar = name, dest = name
+ if (aliases.length) {
+ metavar += sub(' (%s)', aliases.join(', '))
+ }
+ super({ option_strings: [], dest, help, metavar })
+ }
+})
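+
+// Illustrative sketch (names are hypothetical; add_subparsers() itself lives
+// on ArgumentParser, later in this file):
+//   const subparsers = parser.add_subparsers({ dest: 'command' })
+//   const run = subparsers.add_parser('run', { help: 'run a job', aliases: [ 'r' ] })
+//   run.add_argument('--fast', { action: 'store_true' })
+//   // "prog run --fast" then parses to { command: 'run', fast: true }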
+
+
+const _ExtendAction = _callable(class _ExtendAction extends _AppendAction {
+ call(parser, namespace, values/*, option_string = undefined*/) {
+ let items = getattr(namespace, this.dest, undefined)
+ items = _copy_items(items)
+ items = items.concat(values)
+ setattr(namespace, this.dest, items)
+ }
+})
+
+
+// ==============
+// Type classes
+// ==============
+const FileType = _callable(class FileType extends Function {
+ /*
+ * Factory for creating file object types
+ *
+ * Instances of FileType are typically passed as type= arguments to the
+ * ArgumentParser add_argument() method.
+ *
+ * Keyword Arguments:
+ * - mode -- A string indicating how the file is to be opened. Accepts the
+ * same values as the builtin open() function.
+ * - bufsize -- The file's desired buffer size. Accepts the same values as
+ * the builtin open() function.
+ * - encoding -- The file's encoding. Accepts the same values as the
+ * builtin open() function.
+ * - errors -- A string indicating how encoding and decoding errors are to
+ * be handled. Accepts the same value as the builtin open() function.
+ */
+
+ constructor() {
+ let [
+ flags,
+ encoding,
+ mode,
+ autoClose,
+ emitClose,
+ start,
+ end,
+ highWaterMark,
+ fs
+ ] = _parse_opts(arguments, {
+ flags: 'r',
+ encoding: undefined,
+ mode: undefined, // 0o666
+ autoClose: undefined, // true
+ emitClose: undefined, // false
+ start: undefined, // 0
+ end: undefined, // Infinity
+ highWaterMark: undefined, // 64 * 1024
+ fs: undefined
+ })
+
+ // when this class is called as a function, redirect it to .call() method of itself
+ super('return arguments.callee.call.apply(arguments.callee, arguments)')
+
+ Object.defineProperty(this, 'name', {
+ get() {
+ return sub('FileType(%r)', flags)
+ }
+ })
+ this._flags = flags
+ this._options = {}
+ if (encoding !== undefined) this._options.encoding = encoding
+ if (mode !== undefined) this._options.mode = mode
+ if (autoClose !== undefined) this._options.autoClose = autoClose
+ if (emitClose !== undefined) this._options.emitClose = emitClose
+ if (start !== undefined) this._options.start = start
+ if (end !== undefined) this._options.end = end
+ if (highWaterMark !== undefined) this._options.highWaterMark = highWaterMark
+ if (fs !== undefined) this._options.fs = fs
+ }
+
+ call(string) {
+ // the special argument "-" means sys.std{in,out}
+ if (string === '-') {
+ if (this._flags.includes('r')) {
+ return process.stdin
+ } else if (this._flags.includes('w')) {
+ return process.stdout
+ } else {
+ let msg = sub('argument "-" with mode %r', this._flags)
+ throw new TypeError(msg)
+ }
+ }
+
+ // all other arguments are used as file names
+ let fd
+ try {
+ fd = fs.openSync(string, this._flags, this._options.mode)
+ } catch (e) {
+ let args = { filename: string, error: e.message }
+ let message = "can't open '%(filename)s': %(error)s"
+ throw new ArgumentTypeError(sub(message, args))
+ }
+
+ let options = Object.assign({ fd, flags: this._flags }, this._options)
+ if (this._flags.includes('r')) {
+ return fs.createReadStream(undefined, options)
+ } else if (this._flags.includes('w')) {
+ return fs.createWriteStream(undefined, options)
+ } else {
+ let msg = sub('argument "%s" with mode %r', string, this._flags)
+ throw new TypeError(msg)
+ }
+ }
+
+ [util.inspect.custom]() {
+ let args = [ this._flags ]
+ let kwargs = Object.entries(this._options).map(([ k, v ]) => {
+ if (k === 'mode') v = { value: v, [util.inspect.custom]() { return '0o' + this.value.toString(8) } }
+ return [ k, v ]
+ })
+ let args_str = []
+ .concat(args.filter(arg => arg !== -1).map(repr))
+ .concat(kwargs.filter(([/*kw*/, arg]) => arg !== undefined)
+ .map(([kw, arg]) => sub('%s=%r', kw, arg)))
+ .join(', ')
+ return sub('%s(%s)', this.constructor.name, args_str)
+ }
+
+ toString() {
+ return this[util.inspect.custom]()
+ }
+})
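+
+// Illustration: an *instance* of FileType is what gets passed as type= (the
+// add_argument() check below rejects the bare class), e.g.
+//   parser.add_argument('--log', { type: new FileType('w') })
+//   parser.add_argument('infile', { type: new FileType('r'), nargs: '?' })
+// and the special argument "-" maps to process.stdin / process.stdout.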
+
+// ===========================
+// Optional and Positional Parsing
+// ===========================
+const Namespace = _callable(class Namespace extends _AttributeHolder() {
+ /*
+ * Simple object for storing attributes.
+ *
+ * Implements equality by attribute names and values, and provides a simple
+ * string representation.
+ */
+
+ constructor(options = {}) {
+ super()
+ Object.assign(this, options)
+ }
+})
+
+// unset string tag to mimic plain object
+Namespace.prototype[Symbol.toStringTag] = undefined
+
+
+const _ActionsContainer = _camelcase_alias(_callable(class _ActionsContainer {
+
+ constructor() {
+ let [
+ description,
+ prefix_chars,
+ argument_default,
+ conflict_handler
+ ] = _parse_opts(arguments, {
+ description: no_default,
+ prefix_chars: no_default,
+ argument_default: no_default,
+ conflict_handler: no_default
+ })
+
+ this.description = description
+ this.argument_default = argument_default
+ this.prefix_chars = prefix_chars
+ this.conflict_handler = conflict_handler
+
+ // set up registries
+ this._registries = {}
+
+ // register actions
+ this.register('action', undefined, _StoreAction)
+ this.register('action', 'store', _StoreAction)
+ this.register('action', 'store_const', _StoreConstAction)
+ this.register('action', 'store_true', _StoreTrueAction)
+ this.register('action', 'store_false', _StoreFalseAction)
+ this.register('action', 'append', _AppendAction)
+ this.register('action', 'append_const', _AppendConstAction)
+ this.register('action', 'count', _CountAction)
+ this.register('action', 'help', _HelpAction)
+ this.register('action', 'version', _VersionAction)
+ this.register('action', 'parsers', _SubParsersAction)
+ this.register('action', 'extend', _ExtendAction)
+ // LEGACY (v1 compatibility): camelcase variants
+ ;[ 'storeConst', 'storeTrue', 'storeFalse', 'appendConst' ].forEach(old_name => {
+ let new_name = _to_new_name(old_name)
+ this.register('action', old_name, util.deprecate(this._registry_get('action', new_name),
+ sub('{action: "%s"} is renamed to {action: "%s"}', old_name, new_name)))
+ })
+ // end
+
+ // raise an exception if the conflict handler is invalid
+ this._get_handler()
+
+ // action storage
+ this._actions = []
+ this._option_string_actions = {}
+
+ // groups
+ this._action_groups = []
+ this._mutually_exclusive_groups = []
+
+ // defaults storage
+ this._defaults = {}
+
+ // determines whether an "option" looks like a negative number
+ this._negative_number_matcher = /^-\d+$|^-\d*\.\d+$/
+
+ // whether or not there are any optionals that look like negative
+ // numbers -- uses a list so it can be shared and edited
+ this._has_negative_number_optionals = []
+ }
+
+ // ====================
+ // Registration methods
+ // ====================
+ register(registry_name, value, object) {
+ let registry = setdefault(this._registries, registry_name, {})
+ registry[value] = object
+ }
+
+ _registry_get(registry_name, value, default_value = undefined) {
+ return getattr(this._registries[registry_name], value, default_value)
+ }
+
+ // ==================================
+ // Namespace default accessor methods
+ // ==================================
+ set_defaults(kwargs) {
+ Object.assign(this._defaults, kwargs)
+
+ // if these defaults match any existing arguments, replace
+ // the previous default on the object with the new one
+ for (let action of this._actions) {
+ if (action.dest in kwargs) {
+ action.default = kwargs[action.dest]
+ }
+ }
+ }
+
+ get_default(dest) {
+ for (let action of this._actions) {
+ if (action.dest === dest && action.default !== undefined) {
+ return action.default
+ }
+ }
+ return this._defaults[dest]
+ }
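+
+  // Illustration: set_defaults({ verbose: false }) seeds the parsed namespace
+  // and also rewrites the default of any already-added --verbose option;
+  // get_default('verbose') then reports the effective value.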
+
+
+ // =======================
+ // Adding argument actions
+ // =======================
+ add_argument() {
+ /*
+ * add_argument(dest, ..., name=value, ...)
+ * add_argument(option_string, option_string, ..., name=value, ...)
+ */
+ let [
+ args,
+ kwargs
+ ] = _parse_opts(arguments, {
+ '*args': no_default,
+ '**kwargs': no_default
+ })
+ // LEGACY (v1 compatibility), old-style add_argument([ args ], { options })
+ if (args.length === 1 && Array.isArray(args[0])) {
+ args = args[0]
+ deprecate('argument-array',
+ sub('use add_argument(%(args)s, {...}) instead of add_argument([ %(args)s ], { ... })', {
+ args: args.map(repr).join(', ')
+ }))
+ }
+ // end
+
+ // if no positional args are supplied or only one is supplied and
+ // it doesn't look like an option string, parse a positional
+ // argument
+ let chars = this.prefix_chars
+ if (!args.length || args.length === 1 && !chars.includes(args[0][0])) {
+ if (args.length && 'dest' in kwargs) {
+ throw new TypeError('dest supplied twice for positional argument')
+ }
+ kwargs = this._get_positional_kwargs(...args, kwargs)
+
+ // otherwise, we're adding an optional argument
+ } else {
+ kwargs = this._get_optional_kwargs(...args, kwargs)
+ }
+
+ // if no default was supplied, use the parser-level default
+ if (!('default' in kwargs)) {
+ let dest = kwargs.dest
+ if (dest in this._defaults) {
+ kwargs.default = this._defaults[dest]
+ } else if (this.argument_default !== undefined) {
+ kwargs.default = this.argument_default
+ }
+ }
+
+ // create the action object, and add it to the parser
+ let action_class = this._pop_action_class(kwargs)
+ if (typeof action_class !== 'function') {
+ throw new TypeError(sub('unknown action "%s"', action_class))
+ }
+ // eslint-disable-next-line new-cap
+ let action = new action_class(kwargs)
+
+ // raise an error if the action type is not callable
+ let type_func = this._registry_get('type', action.type, action.type)
+ if (typeof type_func !== 'function') {
+ throw new TypeError(sub('%r is not callable', type_func))
+ }
+
+ if (type_func === FileType) {
+ throw new TypeError(sub('%r is a FileType class object, instance of it' +
+ ' must be passed', type_func))
+ }
+
+ // raise an error if the metavar does not match the type
+ if ('_get_formatter' in this) {
+ try {
+ this._get_formatter()._format_args(action, undefined)
+ } catch (err) {
+ // check for 'invalid nargs value' is an artifact of TypeError and ValueError in js being the same
+ if (err instanceof TypeError && err.message !== 'invalid nargs value') {
+ throw new TypeError('length of metavar tuple does not match nargs')
+ } else {
+ throw err
+ }
+ }
+ }
+
+ return this._add_action(action)
+ }
+
+ add_argument_group() {
+ let group = _ArgumentGroup(this, ...arguments)
+ this._action_groups.push(group)
+ return group
+ }
+
+ add_mutually_exclusive_group() {
+ // eslint-disable-next-line no-use-before-define
+ let group = _MutuallyExclusiveGroup(this, ...arguments)
+ this._mutually_exclusive_groups.push(group)
+ return group
+ }
+
+ _add_action(action) {
+ // resolve any conflicts
+ this._check_conflict(action)
+
+ // add to actions list
+ this._actions.push(action)
+ action.container = this
+
+ // index the action by any option strings it has
+ for (let option_string of action.option_strings) {
+ this._option_string_actions[option_string] = action
+ }
+
+ // set the flag if any option strings look like negative numbers
+ for (let option_string of action.option_strings) {
+ if (this._negative_number_matcher.test(option_string)) {
+ if (!this._has_negative_number_optionals.length) {
+ this._has_negative_number_optionals.push(true)
+ }
+ }
+ }
+
+ // return the created action
+ return action
+ }
+
+ _remove_action(action) {
+ _array_remove(this._actions, action)
+ }
+
+ _add_container_actions(container) {
+ // collect groups by titles
+ let title_group_map = {}
+ for (let group of this._action_groups) {
+ if (group.title in title_group_map) {
+ let msg = 'cannot merge actions - two groups are named %r'
+ throw new TypeError(sub(msg, group.title))
+ }
+ title_group_map[group.title] = group
+ }
+
+ // map each action to its group
+ let group_map = new Map()
+ for (let group of container._action_groups) {
+
+ // if a group with the title exists, use that, otherwise
+ // create a new group matching the container's group
+ if (!(group.title in title_group_map)) {
+ title_group_map[group.title] = this.add_argument_group({
+ title: group.title,
+ description: group.description,
+ conflict_handler: group.conflict_handler
+ })
+ }
+
+ // map the actions to their new group
+ for (let action of group._group_actions) {
+ group_map.set(action, title_group_map[group.title])
+ }
+ }
+
+ // add container's mutually exclusive groups
+ // NOTE: if add_mutually_exclusive_group ever gains title= and
+ // description= then this code will need to be expanded as above
+ for (let group of container._mutually_exclusive_groups) {
+ let mutex_group = this.add_mutually_exclusive_group({
+ required: group.required
+ })
+
+ // map the actions to their new mutex group
+ for (let action of group._group_actions) {
+ group_map.set(action, mutex_group)
+ }
+ }
+
+ // add all actions to this container or their group
+ for (let action of container._actions) {
+ group_map.get(action)._add_action(action)
+ }
+ }
+
+ _get_positional_kwargs() {
+ let [
+ dest,
+ kwargs
+ ] = _parse_opts(arguments, {
+ dest: no_default,
+ '**kwargs': no_default
+ })
+
+ // make sure required is not specified
+ if ('required' in kwargs) {
+ let msg = "'required' is an invalid argument for positionals"
+ throw new TypeError(msg)
+ }
+
+ // mark positional arguments as required if at least one is
+ // always required
+ if (![OPTIONAL, ZERO_OR_MORE].includes(kwargs.nargs)) {
+ kwargs.required = true
+ }
+ if (kwargs.nargs === ZERO_OR_MORE && !('default' in kwargs)) {
+ kwargs.required = true
+ }
+
+ // return the keyword arguments with no option strings
+ return Object.assign(kwargs, { dest, option_strings: [] })
+ }
+
+ _get_optional_kwargs() {
+ let [
+ args,
+ kwargs
+ ] = _parse_opts(arguments, {
+ '*args': no_default,
+ '**kwargs': no_default
+ })
+
+ // determine short and long option strings
+ let option_strings = []
+ let long_option_strings = []
+ let option_string
+ for (option_string of args) {
+ // error on strings that don't start with an appropriate prefix
+ if (!this.prefix_chars.includes(option_string[0])) {
+ let args = {option: option_string,
+ prefix_chars: this.prefix_chars}
+ let msg = 'invalid option string %(option)r: ' +
+ 'must start with a character %(prefix_chars)r'
+ throw new TypeError(sub(msg, args))
+ }
+
+ // strings starting with two prefix characters are long options
+ option_strings.push(option_string)
+ if (option_string.length > 1 && this.prefix_chars.includes(option_string[1])) {
+ long_option_strings.push(option_string)
+ }
+ }
+
+ // infer destination, '--foo-bar' -> 'foo_bar' and '-x' -> 'x'
+ let dest = kwargs.dest
+ delete kwargs.dest
+ if (dest === undefined) {
+ let dest_option_string
+ if (long_option_strings.length) {
+ dest_option_string = long_option_strings[0]
+ } else {
+ dest_option_string = option_strings[0]
+ }
+ dest = _string_lstrip(dest_option_string, this.prefix_chars)
+ if (!dest) {
+ let msg = 'dest= is required for options like %r'
+ throw new TypeError(sub(msg, option_string))
+ }
+ dest = dest.replace(/-/g, '_')
+ }
+
+ // return the updated keyword arguments
+ return Object.assign(kwargs, { dest, option_strings })
+ }
+
+ _pop_action_class(kwargs, default_value = undefined) {
+ let action = getattr(kwargs, 'action', default_value)
+ delete kwargs.action
+ return this._registry_get('action', action, action)
+ }
+
+ _get_handler() {
+ // determine function from conflict handler string
+ let handler_func_name = sub('_handle_conflict_%s', this.conflict_handler)
+ if (typeof this[handler_func_name] === 'function') {
+ return this[handler_func_name]
+ } else {
+ let msg = 'invalid conflict_resolution value: %r'
+ throw new TypeError(sub(msg, this.conflict_handler))
+ }
+ }
+
+ _check_conflict(action) {
+
+ // find all options that conflict with this option
+ let confl_optionals = []
+ for (let option_string of action.option_strings) {
+ if (hasattr(this._option_string_actions, option_string)) {
+ let confl_optional = this._option_string_actions[option_string]
+ confl_optionals.push([ option_string, confl_optional ])
+ }
+ }
+
+ // resolve any conflicts
+ if (confl_optionals.length) {
+ let conflict_handler = this._get_handler()
+ conflict_handler.call(this, action, confl_optionals)
+ }
+ }
+
+ _handle_conflict_error(action, conflicting_actions) {
+ let message = conflicting_actions.length === 1 ?
+ 'conflicting option string: %s' :
+ 'conflicting option strings: %s'
+ let conflict_string = conflicting_actions.map(([ option_string/*, action*/ ]) => option_string).join(', ')
+ throw new ArgumentError(action, sub(message, conflict_string))
+ }
+
+ _handle_conflict_resolve(action, conflicting_actions) {
+
+ // remove all conflicting options
+ for (let [ option_string, action ] of conflicting_actions) {
+
+ // remove the conflicting option
+ _array_remove(action.option_strings, option_string)
+ delete this._option_string_actions[option_string]
+
+ // if the option now has no option string, remove it from the
+ // container holding it
+ if (!action.option_strings.length) {
+ action.container._remove_action(action)
+ }
+ }
+ }
+}))
+
+
+const _ArgumentGroup = _callable(class _ArgumentGroup extends _ActionsContainer {
+
+ constructor() {
+ let [
+ container,
+ title,
+ description,
+ kwargs
+ ] = _parse_opts(arguments, {
+ container: no_default,
+ title: undefined,
+ description: undefined,
+ '**kwargs': no_default
+ })
+
+ // add any missing keyword arguments by checking the container
+ setdefault(kwargs, 'conflict_handler', container.conflict_handler)
+ setdefault(kwargs, 'prefix_chars', container.prefix_chars)
+ setdefault(kwargs, 'argument_default', container.argument_default)
+ super(Object.assign({ description }, kwargs))
+
+ // group attributes
+ this.title = title
+ this._group_actions = []
+
+ // share most attributes with the container
+ this._registries = container._registries
+ this._actions = container._actions
+ this._option_string_actions = container._option_string_actions
+ this._defaults = container._defaults
+ this._has_negative_number_optionals =
+ container._has_negative_number_optionals
+ this._mutually_exclusive_groups = container._mutually_exclusive_groups
+ }
+
+ _add_action(action) {
+ action = super._add_action(action)
+ this._group_actions.push(action)
+ return action
+ }
+
+ _remove_action(action) {
+ super._remove_action(action)
+ _array_remove(this._group_actions, action)
+ }
+})
+
+
+const _MutuallyExclusiveGroup = _callable(class _MutuallyExclusiveGroup extends _ArgumentGroup {
+
+ constructor() {
+ let [
+ container,
+ required
+ ] = _parse_opts(arguments, {
+ container: no_default,
+ required: false
+ })
+
+ super(container)
+ this.required = required
+ this._container = container
+ }
+
+ _add_action(action) {
+ if (action.required) {
+ let msg = 'mutually exclusive arguments must be optional'
+ throw new TypeError(msg)
+ }
+ action = this._container._add_action(action)
+ this._group_actions.push(action)
+ return action
+ }
+
+ _remove_action(action) {
+ this._container._remove_action(action)
+ _array_remove(this._group_actions, action)
+ }
+})
+
+
+const ArgumentParser = _camelcase_alias(_callable(class ArgumentParser extends _AttributeHolder(_ActionsContainer) {
+ /*
+ * Object for parsing command line strings into Python objects.
+ *
+ * Keyword Arguments:
+ * - prog -- The name of the program (default: sys.argv[0])
+ * - usage -- A usage message (default: auto-generated from arguments)
+ * - description -- A description of what the program does
+ * - epilog -- Text following the argument descriptions
+ * - parents -- Parsers whose arguments should be copied into this one
+ * - formatter_class -- HelpFormatter class for printing help messages
+ * - prefix_chars -- Characters that prefix optional arguments
+ * - fromfile_prefix_chars -- Characters that prefix files containing
+ * additional arguments
+ * - argument_default -- The default value for all arguments
+ * - conflict_handler -- String indicating how to handle conflicts
+ * - add_help -- Add a -h/--help option
+ * - allow_abbrev -- Allow long options to be abbreviated unambiguously
+ * - exit_on_error -- Determines whether or not ArgumentParser exits with
+ * error info when an error occurs
+ */
+
+ constructor() {
+ let [
+ prog,
+ usage,
+ description,
+ epilog,
+ parents,
+ formatter_class,
+ prefix_chars,
+ fromfile_prefix_chars,
+ argument_default,
+ conflict_handler,
+ add_help,
+ allow_abbrev,
+ exit_on_error,
+ debug, // LEGACY (v1 compatibility), debug mode
+ version // LEGACY (v1 compatibility), version
+ ] = _parse_opts(arguments, {
+ prog: undefined,
+ usage: undefined,
+ description: undefined,
+ epilog: undefined,
+ parents: [],
+ formatter_class: HelpFormatter,
+ prefix_chars: '-',
+ fromfile_prefix_chars: undefined,
+ argument_default: undefined,
+ conflict_handler: 'error',
+ add_help: true,
+ allow_abbrev: true,
+ exit_on_error: true,
+ debug: undefined, // LEGACY (v1 compatibility), debug mode
+ version: undefined // LEGACY (v1 compatibility), version
+ })
+
+ // LEGACY (v1 compatibility)
+ if (debug !== undefined) {
+ deprecate('debug',
+ 'The "debug" argument to ArgumentParser is deprecated. Please ' +
+ 'override ArgumentParser.exit function instead.'
+ )
+ }
+
+ if (version !== undefined) {
+ deprecate('version',
+ 'The "version" argument to ArgumentParser is deprecated. Please use ' +
+ "add_argument(..., { action: 'version', version: 'N', ... }) instead."
+ )
+ }
+ // end
+
+ super({
+ description,
+ prefix_chars,
+ argument_default,
+ conflict_handler
+ })
+
+ // default setting for prog
+ if (prog === undefined) {
+ prog = path.basename(get_argv()[0] || '')
+ }
+
+ this.prog = prog
+ this.usage = usage
+ this.epilog = epilog
+ this.formatter_class = formatter_class
+ this.fromfile_prefix_chars = fromfile_prefix_chars
+ this.add_help = add_help
+ this.allow_abbrev = allow_abbrev
+ this.exit_on_error = exit_on_error
+ // LEGACY (v1 compatibility), debug mode
+ this.debug = debug
+ // end
+
+ this._positionals = this.add_argument_group('positional arguments')
+ this._optionals = this.add_argument_group('optional arguments')
+ this._subparsers = undefined
+
+ // register types
+ function identity(string) {
+ return string
+ }
+ this.register('type', undefined, identity)
+ this.register('type', null, identity)
+ this.register('type', 'auto', identity)
+ this.register('type', 'int', function (x) {
+ let result = Number(x)
+ if (!Number.isInteger(result)) {
+ throw new TypeError(sub('could not convert string to int: %r', x))
+ }
+ return result
+ })
+ this.register('type', 'float', function (x) {
+ let result = Number(x)
+ if (isNaN(result)) {
+ throw new TypeError(sub('could not convert string to float: %r', x))
+ }
+ return result
+ })
+ this.register('type', 'str', String)
+ // LEGACY (v1 compatibility): custom types
+ this.register('type', 'string',
+ util.deprecate(String, 'use {type:"str"} or {type:String} instead of {type:"string"}'))
+ // end
+
+ // add help argument if necessary
+ // (using explicit default to override global argument_default)
+ let default_prefix = prefix_chars.includes('-') ? '-' : prefix_chars[0]
+ if (this.add_help) {
+ this.add_argument(
+ default_prefix + 'h',
+ default_prefix.repeat(2) + 'help',
+ {
+ action: 'help',
+ default: SUPPRESS,
+ help: 'show this help message and exit'
+ }
+ )
+ }
+ // LEGACY (v1 compatibility), version
+ if (version) {
+ this.add_argument(
+ default_prefix + 'v',
+ default_prefix.repeat(2) + 'version',
+ {
+ action: 'version',
+ default: SUPPRESS,
+ version: this.version,
+ help: "show program's version number and exit"
+ }
+ )
+ }
+ // end
+
+ // add parent arguments and defaults
+ for (let parent of parents) {
+ this._add_container_actions(parent)
+ Object.assign(this._defaults, parent._defaults)
+ }
+ }
+
+ // =======================
+ // Pretty __repr__ methods
+ // =======================
+ _get_kwargs() {
+ let names = [
+ 'prog',
+ 'usage',
+ 'description',
+ 'formatter_class',
+ 'conflict_handler',
+ 'add_help'
+ ]
+ return names.map(name => [ name, getattr(this, name) ])
+ }
+
+ // ==================================
+ // Optional/Positional adding methods
+ // ==================================
+ add_subparsers() {
+ let [
+ kwargs
+ ] = _parse_opts(arguments, {
+ '**kwargs': no_default
+ })
+
+ if (this._subparsers !== undefined) {
+ this.error('cannot have multiple subparser arguments')
+ }
+
+ // add the parser class to the arguments if it's not present
+ setdefault(kwargs, 'parser_class', this.constructor)
+
+ if ('title' in kwargs || 'description' in kwargs) {
+ let title = getattr(kwargs, 'title', 'subcommands')
+ let description = getattr(kwargs, 'description', undefined)
+ delete kwargs.title
+ delete kwargs.description
+ this._subparsers = this.add_argument_group(title, description)
+ } else {
+ this._subparsers = this._positionals
+ }
+
+ // prog defaults to the usage message of this parser, skipping
+ // optional arguments and with no "usage:" prefix
+ if (kwargs.prog === undefined) {
+ let formatter = this._get_formatter()
+ let positionals = this._get_positional_actions()
+ let groups = this._mutually_exclusive_groups
+ formatter.add_usage(this.usage, positionals, groups, '')
+ kwargs.prog = formatter.format_help().trim()
+ }
+
+ // create the parsers action and add it to the positionals list
+ let parsers_class = this._pop_action_class(kwargs, 'parsers')
+ // eslint-disable-next-line new-cap
+ let action = new parsers_class(Object.assign({ option_strings: [] }, kwargs))
+ this._subparsers._add_action(action)
+
+ // return the created parsers action
+ return action
+ }
+
+ _add_action(action) {
+ if (action.option_strings.length) {
+ this._optionals._add_action(action)
+ } else {
+ this._positionals._add_action(action)
+ }
+ return action
+ }
+
+ _get_optional_actions() {
+ return this._actions.filter(action => action.option_strings.length)
+ }
+
+ _get_positional_actions() {
+ return this._actions.filter(action => !action.option_strings.length)
+ }
+
+ // =====================================
+ // Command line argument parsing methods
+ // =====================================
+ parse_args(args = undefined, namespace = undefined) {
+ let argv
+ [ args, argv ] = this.parse_known_args(args, namespace)
+ if (argv && argv.length > 0) {
+ let msg = 'unrecognized arguments: %s'
+ this.error(sub(msg, argv.join(' ')))
+ }
+ return args
+ }
+
+ parse_known_args(args = undefined, namespace = undefined) {
+ if (args === undefined) {
+ args = get_argv().slice(1)
+ }
+
+ // default Namespace built from parser defaults
+ if (namespace === undefined) {
+ namespace = new Namespace()
+ }
+
+ // add any action defaults that aren't present
+ for (let action of this._actions) {
+ if (action.dest !== SUPPRESS) {
+ if (!hasattr(namespace, action.dest)) {
+ if (action.default !== SUPPRESS) {
+ setattr(namespace, action.dest, action.default)
+ }
+ }
+ }
+ }
+
+ // add any parser defaults that aren't present
+ for (let dest of Object.keys(this._defaults)) {
+ if (!hasattr(namespace, dest)) {
+ setattr(namespace, dest, this._defaults[dest])
+ }
+ }
+
+ // parse the arguments and exit if there are any errors
+ if (this.exit_on_error) {
+ try {
+ [ namespace, args ] = this._parse_known_args(args, namespace)
+ } catch (err) {
+ if (err instanceof ArgumentError) {
+ this.error(err.message)
+ } else {
+ throw err
+ }
+ }
+ } else {
+ [ namespace, args ] = this._parse_known_args(args, namespace)
+ }
+
+ if (hasattr(namespace, _UNRECOGNIZED_ARGS_ATTR)) {
+ args = args.concat(getattr(namespace, _UNRECOGNIZED_ARGS_ATTR))
+ delattr(namespace, _UNRECOGNIZED_ARGS_ATTR)
+ }
+
+ return [ namespace, args ]
+ }
+
+ _parse_known_args(arg_strings, namespace) {
+ // replace arg strings that are file references
+ if (this.fromfile_prefix_chars !== undefined) {
+ arg_strings = this._read_args_from_files(arg_strings)
+ }
+
+ // map all mutually exclusive arguments to the other arguments
+ // they can't occur with
+ let action_conflicts = new Map()
+ for (let mutex_group of this._mutually_exclusive_groups) {
+ let group_actions = mutex_group._group_actions
+ for (let [ i, mutex_action ] of Object.entries(mutex_group._group_actions)) {
+ let conflicts = action_conflicts.get(mutex_action) || []
+ conflicts = conflicts.concat(group_actions.slice(0, +i))
+ conflicts = conflicts.concat(group_actions.slice(+i + 1))
+ action_conflicts.set(mutex_action, conflicts)
+ }
+ }
+
+ // find all option indices, and determine the arg_string_pattern
+ // which has an 'O' if there is an option at an index,
+ // an 'A' if there is an argument, or a '-' if there is a '--'
+ let option_string_indices = {}
+ let arg_string_pattern_parts = []
+ let arg_strings_iter = Object.entries(arg_strings)[Symbol.iterator]()
+ for (let [ i, arg_string ] of arg_strings_iter) {
+
+ // all args after -- are non-options
+ if (arg_string === '--') {
+ arg_string_pattern_parts.push('-')
+ for ([ i, arg_string ] of arg_strings_iter) {
+ arg_string_pattern_parts.push('A')
+ }
+
+ // otherwise, add the arg to the arg strings
+ // and note the index if it was an option
+ } else {
+ let option_tuple = this._parse_optional(arg_string)
+ let pattern
+ if (option_tuple === undefined) {
+ pattern = 'A'
+ } else {
+ option_string_indices[i] = option_tuple
+ pattern = 'O'
+ }
+ arg_string_pattern_parts.push(pattern)
+ }
+ }
+
+ // join the pieces together to form the pattern
+ let arg_strings_pattern = arg_string_pattern_parts.join('')
+
+ // converts arg strings to the appropriate values and then takes the action
+ let seen_actions = new Set()
+ let seen_non_default_actions = new Set()
+ let extras
+
+ let take_action = (action, argument_strings, option_string = undefined) => {
+ seen_actions.add(action)
+ let argument_values = this._get_values(action, argument_strings)
+
+ // error if this argument is not allowed with other previously
+ // seen arguments, assuming that actions that use the default
+ // value don't really count as "present"
+ if (argument_values !== action.default) {
+ seen_non_default_actions.add(action)
+ for (let conflict_action of action_conflicts.get(action) || []) {
+ if (seen_non_default_actions.has(conflict_action)) {
+ let msg = 'not allowed with argument %s'
+ let action_name = _get_action_name(conflict_action)
+ throw new ArgumentError(action, sub(msg, action_name))
+ }
+ }
+ }
+
+ // take the action if we didn't receive a SUPPRESS value
+ // (e.g. from a default)
+ if (argument_values !== SUPPRESS) {
+ action(this, namespace, argument_values, option_string)
+ }
+ }
+
+ // function to convert arg_strings into an optional action
+ let consume_optional = start_index => {
+
+ // get the optional identified at this index
+ let option_tuple = option_string_indices[start_index]
+ let [ action, option_string, explicit_arg ] = option_tuple
+
+ // identify additional optionals in the same arg string
+ // (e.g. -xyz is the same as -x -y -z if no args are required)
+ let action_tuples = []
+ let stop
+ for (;;) {
+
+ // if we found no optional action, skip it
+ if (action === undefined) {
+ extras.push(arg_strings[start_index])
+ return start_index + 1
+ }
+
+ // if there is an explicit argument, try to match the
+ // optional's string arguments to only this
+ if (explicit_arg !== undefined) {
+ let arg_count = this._match_argument(action, 'A')
+
+ // if the action is a single-dash option and takes no
+ // arguments, try to parse more single-dash options out
+ // of the tail of the option string
+ let chars = this.prefix_chars
+ if (arg_count === 0 && !chars.includes(option_string[1])) {
+ action_tuples.push([ action, [], option_string ])
+ let char = option_string[0]
+ option_string = char + explicit_arg[0]
+ let new_explicit_arg = explicit_arg.slice(1) || undefined
+ let optionals_map = this._option_string_actions
+ if (hasattr(optionals_map, option_string)) {
+ action = optionals_map[option_string]
+ explicit_arg = new_explicit_arg
+ } else {
+ let msg = 'ignored explicit argument %r'
+ throw new ArgumentError(action, sub(msg, explicit_arg))
+ }
+
+ // if the action expects exactly one argument, we've
+ // successfully matched the option; exit the loop
+ } else if (arg_count === 1) {
+ stop = start_index + 1
+ let args = [ explicit_arg ]
+ action_tuples.push([ action, args, option_string ])
+ break
+
+ // error if a double-dash option did not use the
+ // explicit argument
+ } else {
+ let msg = 'ignored explicit argument %r'
+ throw new ArgumentError(action, sub(msg, explicit_arg))
+ }
+
+ // if there is no explicit argument, try to match the
+ // optional's string arguments with the following strings
+ // if successful, exit the loop
+ } else {
+ let start = start_index + 1
+ let selected_patterns = arg_strings_pattern.slice(start)
+ let arg_count = this._match_argument(action, selected_patterns)
+ stop = start + arg_count
+ let args = arg_strings.slice(start, stop)
+ action_tuples.push([ action, args, option_string ])
+ break
+ }
+ }
+
+ // add the Optional to the list and return the index at which
+ // the Optional's string args stopped
+ assert(action_tuples.length)
+ for (let [ action, args, option_string ] of action_tuples) {
+ take_action(action, args, option_string)
+ }
+ return stop
+ }
+
+ // the list of Positionals left to be parsed; this is modified
+ // by consume_positionals()
+ let positionals = this._get_positional_actions()
+
+ // function to convert arg_strings into positional actions
+ let consume_positionals = start_index => {
+ // match as many Positionals as possible
+ let selected_pattern = arg_strings_pattern.slice(start_index)
+ let arg_counts = this._match_arguments_partial(positionals, selected_pattern)
+
+ // slice off the appropriate arg strings for each Positional
+ // and add the Positional and its args to the list
+ for (let i = 0; i < positionals.length && i < arg_counts.length; i++) {
+ let action = positionals[i]
+ let arg_count = arg_counts[i]
+ let args = arg_strings.slice(start_index, start_index + arg_count)
+ start_index += arg_count
+ take_action(action, args)
+ }
+
+ // slice off the Positionals that we just parsed and return the
+ // index at which the Positionals' string args stopped
+ positionals = positionals.slice(arg_counts.length)
+ return start_index
+ }
+
+ // consume Positionals and Optionals alternately, until we have
+ // passed the last option string
+ extras = []
+ let start_index = 0
+ let max_option_string_index = Math.max(-1, ...Object.keys(option_string_indices).map(Number))
+ while (start_index <= max_option_string_index) {
+
+ // consume any Positionals preceding the next option
+ let next_option_string_index = Math.min(
+ // eslint-disable-next-line no-loop-func
+ ...Object.keys(option_string_indices).map(Number).filter(index => index >= start_index)
+ )
+ if (start_index !== next_option_string_index) {
+ let positionals_end_index = consume_positionals(start_index)
+
+ // only try to parse the next optional if we didn't consume
+ // the option string during the positionals parsing
+ if (positionals_end_index > start_index) {
+ start_index = positionals_end_index
+ continue
+ } else {
+ start_index = positionals_end_index
+ }
+ }
+
+ // if we consumed all the positionals we could and we're not
+ // at the index of an option string, there were extra arguments
+ if (!(start_index in option_string_indices)) {
+ let strings = arg_strings.slice(start_index, next_option_string_index)
+ extras = extras.concat(strings)
+ start_index = next_option_string_index
+ }
+
+ // consume the next optional and any arguments for it
+ start_index = consume_optional(start_index)
+ }
+
+ // consume any positionals following the last Optional
+ let stop_index = consume_positionals(start_index)
+
+ // if we didn't consume all the argument strings, there were extras
+ extras = extras.concat(arg_strings.slice(stop_index))
+
+ // make sure all required actions were present and also convert
+ // action defaults which were not given as arguments
+ let required_actions = []
+ for (let action of this._actions) {
+ if (!seen_actions.has(action)) {
+ if (action.required) {
+ required_actions.push(_get_action_name(action))
+ } else {
+ // Convert action default now instead of doing it before
+ // parsing arguments to avoid calling convert functions
+ // twice (which may fail) if the argument was given, but
+ // only if it was defined already in the namespace
+ if (action.default !== undefined &&
+ typeof action.default === 'string' &&
+ hasattr(namespace, action.dest) &&
+ action.default === getattr(namespace, action.dest)) {
+ setattr(namespace, action.dest,
+ this._get_value(action, action.default))
+ }
+ }
+ }
+ }
+
+ if (required_actions.length) {
+ this.error(sub('the following arguments are required: %s',
+ required_actions.join(', ')))
+ }
+
+ // make sure all required groups had one option present
+ for (let group of this._mutually_exclusive_groups) {
+ if (group.required) {
+ let no_actions_used = true
+ for (let action of group._group_actions) {
+ if (seen_non_default_actions.has(action)) {
+ no_actions_used = false
+ break
+ }
+ }
+
+ // if no actions were used, report the error
+ if (no_actions_used) {
+ let names = group._group_actions
+ .filter(action => action.help !== SUPPRESS)
+ .map(action => _get_action_name(action))
+ let msg = 'one of the arguments %s is required'
+ this.error(sub(msg, names.join(' ')))
+ }
+ }
+ }
+
+ // return the updated namespace and the extra arguments
+ return [ namespace, extras ]
+ }
+
+ _read_args_from_files(arg_strings) {
+ // expand arguments referencing files
+ let new_arg_strings = []
+ for (let arg_string of arg_strings) {
+
+ // for regular arguments, just add them back into the list
+ if (!arg_string || !this.fromfile_prefix_chars.includes(arg_string[0])) {
+ new_arg_strings.push(arg_string)
+
+ // replace arguments referencing files with the file content
+ } else {
+ try {
+ let args_file = fs.readFileSync(arg_string.slice(1), 'utf8')
+ let arg_strings = []
+ for (let arg_line of splitlines(args_file)) {
+ for (let arg of this.convert_arg_line_to_args(arg_line)) {
+ arg_strings.push(arg)
+ }
+ }
+ arg_strings = this._read_args_from_files(arg_strings)
+ new_arg_strings = new_arg_strings.concat(arg_strings)
+ } catch (err) {
+ this.error(err.message)
+ }
+ }
+ }
+
+ // return the modified argument list
+ return new_arg_strings
+ }
+
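+ // by default each line of a file referenced via fromfile_prefix_chars
+ // becomes one argument, e.g. a file containing "-f\nbar" yields ['-f', 'bar']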
+ convert_arg_line_to_args(arg_line) {
+ return [arg_line]
+ }
+
+ _match_argument(action, arg_strings_pattern) {
+ // match the pattern for this action to the arg strings
+ let nargs_pattern = this._get_nargs_pattern(action)
+ let match = arg_strings_pattern.match(new RegExp('^' + nargs_pattern))
+
+ // raise an exception if we weren't able to find a match
+ if (match === null) {
+ let nargs_errors = {
+ undefined: 'expected one argument',
+ [OPTIONAL]: 'expected at most one argument',
+ [ONE_OR_MORE]: 'expected at least one argument'
+ }
+ let msg = nargs_errors[action.nargs]
+ if (msg === undefined) {
+ msg = sub(action.nargs === 1 ? 'expected %s argument' : 'expected %s arguments', action.nargs)
+ }
+ throw new ArgumentError(action, msg)
+ }
+
+ // return the number of arguments matched
+ return match[1].length
+ }
+
+ _match_arguments_partial(actions, arg_strings_pattern) {
+ // progressively shorten the actions list by slicing off the
+ // final actions until we find a match
+ let result = []
+ for (let i of range(actions.length, 0, -1)) {
+ let actions_slice = actions.slice(0, i)
+ let pattern = actions_slice.map(action => this._get_nargs_pattern(action)).join('')
+ let match = arg_strings_pattern.match(new RegExp('^' + pattern))
+ if (match !== null) {
+ result = result.concat(match.slice(1).map(string => string.length))
+ break
+ }
+ }
+
+ // return the list of arg string counts
+ return result
+ }
+
+ _parse_optional(arg_string) {
+ // if it's an empty string, it was meant to be a positional
+ if (!arg_string) {
+ return undefined
+ }
+
+ // if it doesn't start with a prefix, it was meant to be positional
+ if (!this.prefix_chars.includes(arg_string[0])) {
+ return undefined
+ }
+
+ // if the option string is present in the parser, return the action
+ if (arg_string in this._option_string_actions) {
+ let action = this._option_string_actions[arg_string]
+ return [ action, arg_string, undefined ]
+ }
+
+ // if it's just a single character, it was meant to be positional
+ if (arg_string.length === 1) {
+ return undefined
+ }
+
+ // if the option string before the "=" is present, return the action
+ if (arg_string.includes('=')) {
+ let [ option_string, explicit_arg ] = _string_split(arg_string, '=', 1)
+ if (option_string in this._option_string_actions) {
+ let action = this._option_string_actions[option_string]
+ return [ action, option_string, explicit_arg ]
+ }
+ }
+
+ // search through all possible prefixes of the option string
+ // and all actions in the parser for possible interpretations
+ let option_tuples = this._get_option_tuples(arg_string)
+
+ // if multiple actions match, the option string was ambiguous
+ if (option_tuples.length > 1) {
+ let options = option_tuples.map(([ /*action*/, option_string/*, explicit_arg*/ ]) => option_string).join(', ')
+ let args = {option: arg_string, matches: options}
+ let msg = 'ambiguous option: %(option)s could match %(matches)s'
+ this.error(sub(msg, args))
+
+ // if exactly one action matched, this segmentation is good,
+ // so return the parsed action
+ } else if (option_tuples.length === 1) {
+ let [ option_tuple ] = option_tuples
+ return option_tuple
+ }
+
+ // if it was not found as an option, but it looks like a negative
+ // number, it was meant to be positional
+ // unless there are negative-number-like options
+ if (this._negative_number_matcher.test(arg_string)) {
+ if (!this._has_negative_number_optionals.length) {
+ return undefined
+ }
+ }
+
+ // if it contains a space, it was meant to be a positional
+ if (arg_string.includes(' ')) {
+ return undefined
+ }
+
+ // it was meant to be an optional but there is no such option
+ // in this parser (though it might be a valid option in a subparser)
+ return [ undefined, arg_string, undefined ]
+ }
+
+ _get_option_tuples(option_string) {
+ let result = []
+
+ // option strings starting with two prefix characters are only
+ // split at the '='
+ let chars = this.prefix_chars
+ if (chars.includes(option_string[0]) && chars.includes(option_string[1])) {
+ if (this.allow_abbrev) {
+ let option_prefix, explicit_arg
+ if (option_string.includes('=')) {
+ [ option_prefix, explicit_arg ] = _string_split(option_string, '=', 1)
+ } else {
+ option_prefix = option_string
+ explicit_arg = undefined
+ }
+ for (let option_string of Object.keys(this._option_string_actions)) {
+ if (option_string.startsWith(option_prefix)) {
+ let action = this._option_string_actions[option_string]
+ let tup = [ action, option_string, explicit_arg ]
+ result.push(tup)
+ }
+ }
+ }
+
+ // single character options can be concatenated with their arguments
+ // but multiple character options always have to have their argument
+ // separate
+ } else if (chars.includes(option_string[0]) && !chars.includes(option_string[1])) {
+ let option_prefix = option_string
+ let explicit_arg = undefined
+ let short_option_prefix = option_string.slice(0, 2)
+ let short_explicit_arg = option_string.slice(2)
+
+ for (let option_string of Object.keys(this._option_string_actions)) {
+ if (option_string === short_option_prefix) {
+ let action = this._option_string_actions[option_string]
+ let tup = [ action, option_string, short_explicit_arg ]
+ result.push(tup)
+ } else if (option_string.startsWith(option_prefix)) {
+ let action = this._option_string_actions[option_string]
+ let tup = [ action, option_string, explicit_arg ]
+ result.push(tup)
+ }
+ }
+
+ // shouldn't ever get here
+ } else {
+ this.error(sub('unexpected option string: %s', option_string))
+ }
+
+ // return the collected option tuples
+ return result
+ }
+
+ _get_nargs_pattern(action) {
+ // in all examples below, we have to allow for '--' args
+ // which are represented as '-' in the pattern
+ let nargs = action.nargs
+ let nargs_pattern
+
+ // the default (None) is assumed to be a single argument
+ if (nargs === undefined) {
+ nargs_pattern = '(-*A-*)'
+
+ // allow zero or one arguments
+ } else if (nargs === OPTIONAL) {
+ nargs_pattern = '(-*A?-*)'
+
+ // allow zero or more arguments
+ } else if (nargs === ZERO_OR_MORE) {
+ nargs_pattern = '(-*[A-]*)'
+
+ // allow one or more arguments
+ } else if (nargs === ONE_OR_MORE) {
+ nargs_pattern = '(-*A[A-]*)'
+
+ // allow any number of options or arguments
+ } else if (nargs === REMAINDER) {
+ nargs_pattern = '([-AO]*)'
+
+ // allow one argument followed by any number of options or arguments
+ } else if (nargs === PARSER) {
+ nargs_pattern = '(-*A[-AO]*)'
+
+ // suppress action, like nargs=0
+ } else if (nargs === SUPPRESS) {
+ nargs_pattern = '(-*-*)'
+
+ // all others should be integers
+ } else {
+ nargs_pattern = sub('(-*%s-*)', 'A'.repeat(nargs).split('').join('-*'))
+ }
+
+ // if this is an optional action, -- is not allowed
+ if (action.option_strings.length) {
+ nargs_pattern = nargs_pattern.replace(/-\*/g, '')
+ nargs_pattern = nargs_pattern.replace(/-/g, '')
+ }
+
+ // return the pattern
+ return nargs_pattern
+ }
+
+ // ========================
+ // Alt command line argument parsing, allowing free intermix
+ // ========================
+
+ parse_intermixed_args(args = undefined, namespace = undefined) {
+ let argv
+ [ args, argv ] = this.parse_known_intermixed_args(args, namespace)
+ if (argv.length) {
+ let msg = 'unrecognized arguments: %s'
+ this.error(sub(msg, argv.join(' ')))
+ }
+ return args
+ }
+
+ parse_known_intermixed_args(args = undefined, namespace = undefined) {
+ // returns a namespace and list of extras
+ //
+ // positionals can be freely intermixed with optionals. optionals are
+ // first parsed with all positional arguments deactivated. The 'extras'
+ // are then parsed. If the parser definition is incompatible with the
+ // intermixed assumptions (e.g. use of REMAINDER, subparsers) a
+ // TypeError is raised.
+ //
+ // positionals are 'deactivated' by setting nargs and default to
+ // SUPPRESS. This blocks the addition of that positional to the
+ // namespace
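+ //
+ // e.g. with add_argument('--foo') and add_argument('rest', { nargs: '*' }),
+ // parse_intermixed_args(['a', '--foo', 'x', 'b']) yields foo='x', rest=['a', 'b'],
+ // whereas plain parse_args() would reject the trailing 'b' as unrecognized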
+
+ let extras
+ let positionals = this._get_positional_actions()
+ let a = positionals.filter(action => [ PARSER, REMAINDER ].includes(action.nargs))
+ if (a.length) {
+ throw new TypeError(sub('parse_intermixed_args: positional arg' +
+ ' with nargs=%s', a[0].nargs))
+ }
+
+ for (let group of this._mutually_exclusive_groups) {
+ for (let action of group._group_actions) {
+ if (positionals.includes(action)) {
+ throw new TypeError('parse_intermixed_args: positional in' +
+ ' mutuallyExclusiveGroup')
+ }
+ }
+ }
+
+ let save_usage
+ try {
+ save_usage = this.usage
+ let remaining_args
+ try {
+ if (this.usage === undefined) {
+ // capture the full usage for use in error messages
+ this.usage = this.format_usage().slice(7)
+ }
+ for (let action of positionals) {
+ // deactivate positionals
+ action.save_nargs = action.nargs
+ // action.nargs = 0
+ action.nargs = SUPPRESS
+ action.save_default = action.default
+ action.default = SUPPRESS
+ }
+ [ namespace, remaining_args ] = this.parse_known_args(args,
+ namespace)
+ for (let action of positionals) {
+ // remove the empty positional values from namespace
+ let attr = getattr(namespace, action.dest)
+ if (Array.isArray(attr) && attr.length === 0) {
+ // eslint-disable-next-line no-console
+ console.warn(sub('Do not expect %s in %s', action.dest, namespace))
+ delattr(namespace, action.dest)
+ }
+ }
+ } finally {
+ // restore nargs and usage before exiting
+ for (let action of positionals) {
+ action.nargs = action.save_nargs
+ action.default = action.save_default
+ }
+ }
+ let optionals = this._get_optional_actions()
+ try {
+ // parse positionals. optionals aren't normally required, but
+ // they could be, so make sure they aren't.
+ for (let action of optionals) {
+ action.save_required = action.required
+ action.required = false
+ }
+ for (let group of this._mutually_exclusive_groups) {
+ group.save_required = group.required
+ group.required = false
+ }
+ [ namespace, extras ] = this.parse_known_args(remaining_args,
+ namespace)
+ } finally {
+ // restore parser values before exiting
+ for (let action of optionals) {
+ action.required = action.save_required
+ }
+ for (let group of this._mutually_exclusive_groups) {
+ group.required = group.save_required
+ }
+ }
+ } finally {
+ this.usage = save_usage
+ }
+ return [ namespace, extras ]
+ }
+
+ // ========================
+ // Value conversion methods
+ // ========================
+ _get_values(action, arg_strings) {
+ // for everything but PARSER, REMAINDER args, strip out first '--'
+ if (![PARSER, REMAINDER].includes(action.nargs)) {
+ try {
+ _array_remove(arg_strings, '--')
+ } catch (err) {}
+ }
+
+ let value
+ // optional argument produces a default when not present
+ if (!arg_strings.length && action.nargs === OPTIONAL) {
+ if (action.option_strings.length) {
+ value = action.const
+ } else {
+ value = action.default
+ }
+ if (typeof value === 'string') {
+ value = this._get_value(action, value)
+ this._check_value(action, value)
+ }
+
+ // when nargs='*' on a positional, if there were no command-line
+ // args, use the default if it is anything other than None
+ } else if (!arg_strings.length && action.nargs === ZERO_OR_MORE &&
+ !action.option_strings.length) {
+ if (action.default !== undefined) {
+ value = action.default
+ } else {
+ value = arg_strings
+ }
+ this._check_value(action, value)
+
+ // single argument or optional argument produces a single value
+ } else if (arg_strings.length === 1 && [undefined, OPTIONAL].includes(action.nargs)) {
+ let arg_string = arg_strings[0]
+ value = this._get_value(action, arg_string)
+ this._check_value(action, value)
+
+ // REMAINDER arguments convert all values, checking none
+ } else if (action.nargs === REMAINDER) {
+ value = arg_strings.map(v => this._get_value(action, v))
+
+ // PARSER arguments convert all values, but check only the first
+ } else if (action.nargs === PARSER) {
+ value = arg_strings.map(v => this._get_value(action, v))
+ this._check_value(action, value[0])
+
+ // SUPPRESS argument does not put anything in the namespace
+ } else if (action.nargs === SUPPRESS) {
+ value = SUPPRESS
+
+ // all other types of nargs produce a list
+ } else {
+ value = arg_strings.map(v => this._get_value(action, v))
+ for (let v of value) {
+ this._check_value(action, v)
+ }
+ }
+
+ // return the converted value
+ return value
+ }
+
+ _get_value(action, arg_string) {
+ let type_func = this._registry_get('type', action.type, action.type)
+ if (typeof type_func !== 'function') {
+ let msg = '%r is not callable'
+ throw new ArgumentError(action, sub(msg, type_func))
+ }
+
+ // convert the value to the appropriate type
+ let result
+ try {
+ try {
+ result = type_func(arg_string)
+ } catch (err) {
+ // Dear TC39, why would you ever consider making es6 classes not callable?
+ // We had one universal interface, [[Call]], which worked for anything
+ // (with familiar this-instanceof guard for classes). Now we have two.
+ if (err instanceof TypeError &&
+ /Class constructor .* cannot be invoked without 'new'/.test(err.message)) {
+ // eslint-disable-next-line new-cap
+ result = new type_func(arg_string)
+ } else {
+ throw err
+ }
+ }
+
+ } catch (err) {
+ // ArgumentTypeErrors indicate errors
+ if (err instanceof ArgumentTypeError) {
+ //let name = getattr(action.type, 'name', repr(action.type))
+ let msg = err.message
+ throw new ArgumentError(action, msg)
+
+ // TypeErrors or ValueErrors also indicate errors
+ } else if (err instanceof TypeError) {
+ let name = getattr(action.type, 'name', repr(action.type))
+ let args = {type: name, value: arg_string}
+ let msg = 'invalid %(type)s value: %(value)r'
+ throw new ArgumentError(action, sub(msg, args))
+ } else {
+ throw err
+ }
+ }
+
+ // return the converted value
+ return result
+ }
+
+ _check_value(action, value) {
+ // converted value must be one of the choices (if specified)
+ if (action.choices !== undefined && !_choices_to_array(action.choices).includes(value)) {
+ let args = {value,
+ choices: _choices_to_array(action.choices).map(repr).join(', ')}
+ let msg = 'invalid choice: %(value)r (choose from %(choices)s)'
+ throw new ArgumentError(action, sub(msg, args))
+ }
+ }
+
+ // =======================
+ // Help-formatting methods
+ // =======================
+ format_usage() {
+ let formatter = this._get_formatter()
+ formatter.add_usage(this.usage, this._actions,
+ this._mutually_exclusive_groups)
+ return formatter.format_help()
+ }
+
+ format_help() {
+ let formatter = this._get_formatter()
+
+ // usage
+ formatter.add_usage(this.usage, this._actions,
+ this._mutually_exclusive_groups)
+
+ // description
+ formatter.add_text(this.description)
+
+ // positionals, optionals and user-defined groups
+ for (let action_group of this._action_groups) {
+ formatter.start_section(action_group.title)
+ formatter.add_text(action_group.description)
+ formatter.add_arguments(action_group._group_actions)
+ formatter.end_section()
+ }
+
+ // epilog
+ formatter.add_text(this.epilog)
+
+ // determine help from format above
+ return formatter.format_help()
+ }
+
+ _get_formatter() {
+ // eslint-disable-next-line new-cap
+ return new this.formatter_class({ prog: this.prog })
+ }
+
+ // =====================
+ // Help-printing methods
+ // =====================
+ print_usage(file = undefined) {
+ if (file === undefined) file = process.stdout
+ this._print_message(this.format_usage(), file)
+ }
+
+ print_help(file = undefined) {
+ if (file === undefined) file = process.stdout
+ this._print_message(this.format_help(), file)
+ }
+
+ _print_message(message, file = undefined) {
+ if (message) {
+ if (file === undefined) file = process.stderr
+ file.write(message)
+ }
+ }
+
+ // ===============
+ // Exiting methods
+ // ===============
+ exit(status = 0, message = undefined) {
+ if (message) {
+ this._print_message(message, process.stderr)
+ }
+ process.exit(status)
+ }
+
+ error(message) {
+ /*
+ * error(message: string)
+ *
+ * Prints a usage message incorporating the message to stderr and
+ * exits.
+ *
+ * If you override this in a subclass, it should not return -- it
+ * should either exit or raise an exception.
+ */
+
+ // LEGACY (v1 compatibility), debug mode
+ if (this.debug === true) throw new Error(message)
+ // end
+ this.print_usage(process.stderr)
+ let args = {prog: this.prog, message: message}
+ this.exit(2, sub('%(prog)s: error: %(message)s\n', args))
+ }
+}))
+
+
+module.exports = {
+ ArgumentParser,
+ ArgumentError,
+ ArgumentTypeError,
+ BooleanOptionalAction,
+ FileType,
+ HelpFormatter,
+ ArgumentDefaultsHelpFormatter,
+ RawDescriptionHelpFormatter,
+ RawTextHelpFormatter,
+ MetavarTypeHelpFormatter,
+ Namespace,
+ Action,
+ ONE_OR_MORE,
+ OPTIONAL,
+ PARSER,
+ REMAINDER,
+ SUPPRESS,
+ ZERO_OR_MORE
+}
+
+// LEGACY (v1 compatibility), Const alias
+Object.defineProperty(module.exports, 'Const', {
+ get() {
+ let result = {}
+ Object.entries({ ONE_OR_MORE, OPTIONAL, PARSER, REMAINDER, SUPPRESS, ZERO_OR_MORE }).forEach(([ n, v ]) => {
+ Object.defineProperty(result, n, {
+ get() {
+ deprecate(n, sub('use argparse.%s instead of argparse.Const.%s', n, n))
+ return v
+ }
+ })
+ })
+ Object.entries({ _UNRECOGNIZED_ARGS_ATTR }).forEach(([ n, v ]) => {
+ Object.defineProperty(result, n, {
+ get() {
+ deprecate(n, sub('argparse.Const.%s is an internal symbol and will no longer be available', n))
+ return v
+ }
+ })
+ })
+ return result
+ },
+ enumerable: false
+})
+// end
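+
+// Illustrative usage of the exported API (a sketch, not upstream code), assuming
+// the package is loaded through its normal entry point:
+//
+//   const { ArgumentParser } = require('argparse')
+//   const parser = new ArgumentParser({ description: 'example program' })
+//   parser.add_argument('-v', '--verbose', { action: 'store_true' })
+//   parser.add_argument('file', { nargs: '?', default: '-' })
+//   const args = parser.parse_args([ '-v', 'input.txt' ])
+//   // args.verbose === true; args.file === 'input.txt'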
diff --git a/node_modules/argparse/package.json b/node_modules/argparse/package.json
new file mode 100644
index 0000000..647d2af
--- /dev/null
+++ b/node_modules/argparse/package.json
@@ -0,0 +1,31 @@
+{
+ "name": "argparse",
+ "description": "CLI arguments parser. Native port of python's argparse.",
+ "version": "2.0.1",
+ "keywords": [
+ "cli",
+ "parser",
+ "argparse",
+ "option",
+ "args"
+ ],
+ "main": "argparse.js",
+ "files": [
+ "argparse.js",
+ "lib/"
+ ],
+ "license": "Python-2.0",
+ "repository": "nodeca/argparse",
+ "scripts": {
+ "lint": "eslint .",
+ "test": "npm run lint && nyc mocha",
+ "coverage": "npm run test && nyc report --reporter html"
+ },
+ "devDependencies": {
+ "@babel/eslint-parser": "^7.11.0",
+ "@babel/plugin-syntax-class-properties": "^7.10.4",
+ "eslint": "^7.5.0",
+ "mocha": "^8.0.1",
+ "nyc": "^15.1.0"
+ }
+}
diff --git a/node_modules/balanced-match/.github/FUNDING.yml b/node_modules/balanced-match/.github/FUNDING.yml
new file mode 100644
index 0000000..cea8b16
--- /dev/null
+++ b/node_modules/balanced-match/.github/FUNDING.yml
@@ -0,0 +1,2 @@
+tidelift: "npm/balanced-match"
+patreon: juliangruber
diff --git a/node_modules/balanced-match/LICENSE.md b/node_modules/balanced-match/LICENSE.md
new file mode 100644
index 0000000..2cdc8e4
--- /dev/null
+++ b/node_modules/balanced-match/LICENSE.md
@@ -0,0 +1,21 @@
+(MIT)
+
+Copyright (c) 2013 Julian Gruber <julian@juliangruber.com>
+
+Permission is hereby granted, free of charge, to any person obtaining a copy of
+this software and associated documentation files (the "Software"), to deal in
+the Software without restriction, including without limitation the rights to
+use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+of the Software, and to permit persons to whom the Software is furnished to do
+so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/node_modules/balanced-match/README.md b/node_modules/balanced-match/README.md
new file mode 100644
index 0000000..d2a48b6
--- /dev/null
+++ b/node_modules/balanced-match/README.md
@@ -0,0 +1,97 @@
+# balanced-match
+
+Match balanced string pairs, like `{` and `}` or `<b>` and `</b>`. Supports regular expressions as well!
+
+[build status](http://travis-ci.org/juliangruber/balanced-match)
+[npm package](https://www.npmjs.org/package/balanced-match)
+
+[browser support](https://ci.testling.com/juliangruber/balanced-match)
+
+## Example
+
+Get the first matching pair of braces:
+
+```js
+var balanced = require('balanced-match');
+
+console.log(balanced('{', '}', 'pre{in{nested}}post'));
+console.log(balanced('{', '}', 'pre{first}between{second}post'));
+console.log(balanced(/\s+\{\s+/, /\s+\}\s+/, 'pre { in{nest} } post'));
+```
+
+The matches are:
+
+```bash
+$ node example.js
+{ start: 3, end: 14, pre: 'pre', body: 'in{nested}', post: 'post' }
+{ start: 3,
+ end: 9,
+ pre: 'pre',
+ body: 'first',
+ post: 'between{second}post' }
+{ start: 3, end: 17, pre: 'pre', body: 'in{nest}', post: 'post' }
+```
+
+## API
+
+### var m = balanced(a, b, str)
+
+For the first non-nested matching pair of `a` and `b` in `str`, return an
+object with those keys:
+
+* **start** the index of the first match of `a`
+* **end** the index of the matching `b`
+* **pre** the preamble, `a` and `b` not included
+* **body** the match, `a` and `b` not included
+* **post** the postscript, `a` and `b` not included
+
+If there's no match, `undefined` will be returned.
+
+If the `str` contains more `a` than `b` / there are unmatched pairs, the first match that was closed will be used. For example, `{{a}` will match `['{', 'a', '']` and `{a}}` will match `['', 'a', '}']`.
+
+### var r = balanced.range(a, b, str)
+
+For the first non-nested matching pair of `a` and `b` in `str`, return an
+array with indexes: `[ <a index>, <b index> ]`.
+
+If there's no match, `undefined` will be returned.
+
+If the `str` contains more `a` than `b` / there are unmatched pairs, the first match that was closed will be used. For example, `{{a}` will match `[ 1, 3 ]` and `{a}}` will match `[0, 2]`.
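+
+A small sketch of `range` in action (same input as the first example above):
+
+```js
+var balanced = require('balanced-match');
+
+// indexes of the outermost matching pair
+console.log(balanced.range('{', '}', 'pre{in{nested}}post'));
+// => [ 3, 14 ]
+```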
+
+## Installation
+
+With [npm](https://npmjs.org) do:
+
+```bash
+npm install balanced-match
+```
+
+## Security contact information
+
+To report a security vulnerability, please use the
+[Tidelift security contact](https://tidelift.com/security).
+Tidelift will coordinate the fix and disclosure.
+
+## License
+
+(MIT)
+
+Copyright (c) 2013 Julian Gruber <julian@juliangruber.com>
+
+Permission is hereby granted, free of charge, to any person obtaining a copy of
+this software and associated documentation files (the "Software"), to deal in
+the Software without restriction, including without limitation the rights to
+use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+of the Software, and to permit persons to whom the Software is furnished to do
+so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/node_modules/balanced-match/index.js b/node_modules/balanced-match/index.js
new file mode 100644
index 0000000..c67a646
--- /dev/null
+++ b/node_modules/balanced-match/index.js
@@ -0,0 +1,62 @@
+'use strict';
+module.exports = balanced;
+function balanced(a, b, str) {
+ if (a instanceof RegExp) a = maybeMatch(a, str);
+ if (b instanceof RegExp) b = maybeMatch(b, str);
+
+ var r = range(a, b, str);
+
+ return r && {
+ start: r[0],
+ end: r[1],
+ pre: str.slice(0, r[0]),
+ body: str.slice(r[0] + a.length, r[1]),
+ post: str.slice(r[1] + b.length)
+ };
+}
+
+function maybeMatch(reg, str) {
+ var m = str.match(reg);
+ return m ? m[0] : null;
+}
+
+balanced.range = range;
+function range(a, b, str) {
+ var begs, beg, left, right, result;
+ var ai = str.indexOf(a);
+ var bi = str.indexOf(b, ai + 1);
+ var i = ai;
+
+ if (ai >= 0 && bi > 0) {
+ if(a===b) {
+ return [ai, bi];
+ }
+ begs = [];
+ left = str.length;
+
+ while (i >= 0 && !result) {
+ if (i == ai) {
+ begs.push(i);
+ ai = str.indexOf(a, i + 1);
+ } else if (begs.length == 1) {
+ result = [ begs.pop(), bi ];
+ } else {
+ beg = begs.pop();
+ if (beg < left) {
+ left = beg;
+ right = bi;
+ }
+
+ bi = str.indexOf(b, i + 1);
+ }
+
+ i = ai < bi && ai >= 0 ? ai : bi;
+ }
+
+ if (begs.length) {
+ result = [ left, right ];
+ }
+ }
+
+ return result;
+}
diff --git a/node_modules/balanced-match/package.json b/node_modules/balanced-match/package.json
new file mode 100644
index 0000000..ce6073e
--- /dev/null
+++ b/node_modules/balanced-match/package.json
@@ -0,0 +1,48 @@
+{
+ "name": "balanced-match",
+ "description": "Match balanced character pairs, like \"{\" and \"}\"",
+ "version": "1.0.2",
+ "repository": {
+ "type": "git",
+ "url": "git://github.com/juliangruber/balanced-match.git"
+ },
+ "homepage": "https://github.com/juliangruber/balanced-match",
+ "main": "index.js",
+ "scripts": {
+ "test": "tape test/test.js",
+ "bench": "matcha test/bench.js"
+ },
+ "devDependencies": {
+ "matcha": "^0.7.0",
+ "tape": "^4.6.0"
+ },
+ "keywords": [
+ "match",
+ "regexp",
+ "test",
+ "balanced",
+ "parse"
+ ],
+ "author": {
+ "name": "Julian Gruber",
+ "email": "mail@juliangruber.com",
+ "url": "http://juliangruber.com"
+ },
+ "license": "MIT",
+ "testling": {
+ "files": "test/*.js",
+ "browsers": [
+ "ie/8..latest",
+ "firefox/20..latest",
+ "firefox/nightly",
+ "chrome/25..latest",
+ "chrome/canary",
+ "opera/12..latest",
+ "opera/next",
+ "safari/5.1..latest",
+ "ipad/6.0..latest",
+ "iphone/6.0..latest",
+ "android-browser/4.2..latest"
+ ]
+ }
+}
diff --git a/node_modules/brace-expansion/.github/FUNDING.yml b/node_modules/brace-expansion/.github/FUNDING.yml
new file mode 100644
index 0000000..79d1eaf
--- /dev/null
+++ b/node_modules/brace-expansion/.github/FUNDING.yml
@@ -0,0 +1,2 @@
+tidelift: "npm/brace-expansion"
+patreon: juliangruber
diff --git a/node_modules/brace-expansion/LICENSE b/node_modules/brace-expansion/LICENSE
new file mode 100644
index 0000000..de32266
--- /dev/null
+++ b/node_modules/brace-expansion/LICENSE
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2013 Julian Gruber
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/node_modules/brace-expansion/README.md b/node_modules/brace-expansion/README.md
new file mode 100644
index 0000000..e55c583
--- /dev/null
+++ b/node_modules/brace-expansion/README.md
@@ -0,0 +1,135 @@
+# brace-expansion
+
+[Brace expansion](https://www.gnu.org/software/bash/manual/html_node/Brace-Expansion.html),
+as known from sh/bash, in JavaScript.
+
+[build status](http://travis-ci.org/juliangruber/brace-expansion)
+[npm package](https://www.npmjs.org/package/brace-expansion)
+[greenkeeper](https://greenkeeper.io/)
+
+[browser support](https://ci.testling.com/juliangruber/brace-expansion)
+
+## Example
+
+```js
+var expand = require('brace-expansion');
+
+expand('file-{a,b,c}.jpg')
+// => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg']
+
+expand('-v{,,}')
+// => ['-v', '-v', '-v']
+
+expand('file{0..2}.jpg')
+// => ['file0.jpg', 'file1.jpg', 'file2.jpg']
+
+expand('file-{a..c}.jpg')
+// => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg']
+
+expand('file{2..0}.jpg')
+// => ['file2.jpg', 'file1.jpg', 'file0.jpg']
+
+expand('file{0..4..2}.jpg')
+// => ['file0.jpg', 'file2.jpg', 'file4.jpg']
+
+expand('file-{a..e..2}.jpg')
+// => ['file-a.jpg', 'file-c.jpg', 'file-e.jpg']
+
+expand('file{00..10..5}.jpg')
+// => ['file00.jpg', 'file05.jpg', 'file10.jpg']
+
+expand('{{A..C},{a..c}}')
+// => ['A', 'B', 'C', 'a', 'b', 'c']
+
+expand('ppp{,config,oe{,conf}}')
+// => ['ppp', 'pppconfig', 'pppoe', 'pppoeconf']
+```
+
+## API
+
+```js
+var expand = require('brace-expansion');
+```
+
+### var expanded = expand(str)
+
+Return an array of all possible and valid expansions of `str`. If none are
+found, `[str]` is returned.
+
+Valid expansions are:
+
+```js
+/^(.*,)+(.+)?$/
+// {a,b,...}
+```
+
+A comma separated list of options, like `{a,b}` or `{a,{b,c}}` or `{,a,}`.
+
+```js
+/^-?\d+\.\.-?\d+(\.\.-?\d+)?$/
+// {x..y[..incr]}
+```
+
+A numeric sequence from `x` to `y` inclusive, with optional increment.
+If `x` or `y` start with a leading `0`, all the numbers will be padded
+to have equal length. Negative numbers and backwards iteration work too.
+
+```js
+/^[a-zA-Z]\.\.[a-zA-Z](\.\.-?\d+)?$/
+// {x..y[..incr]}
+```
+
+An alphabetic sequence from `x` to `y` inclusive, with optional increment.
+`x` and `y` must be exactly one character, and if given, `incr` must be a
+number.
+
+For compatibility reasons, the string `${` is not eligible for brace expansion.
+
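+For example, a `${` block is passed through untouched while ordinary braces in the same string still expand (a sketch based on the implementation in `index.js`):
+
+```js
+var expand = require('brace-expansion');
+
+expand('${a,b}')
+// => ['${a,b}']
+
+expand('${a,b}-{c,d}')
+// => ['${a,b}-c', '${a,b}-d']
+```
+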
+## Installation
+
+With [npm](https://npmjs.org) do:
+
+```bash
+npm install brace-expansion
+```
+
+## Contributors
+
+- [Julian Gruber](https://github.com/juliangruber)
+- [Isaac Z. Schlueter](https://github.com/isaacs)
+
+## Sponsors
+
+This module is proudly supported by my [Sponsors](https://github.com/juliangruber/sponsors)!
+
+Do you want to support modules like this to improve their quality, stability and weigh in on new features? Then please consider donating to my [Patreon](https://www.patreon.com/juliangruber). Not sure how much of my modules you're using? Try [feross/thanks](https://github.com/feross/thanks)!
+
+## Security contact information
+
+To report a security vulnerability, please use the
+[Tidelift security contact](https://tidelift.com/security).
+Tidelift will coordinate the fix and disclosure.
+
+## License
+
+(MIT)
+
+Copyright (c) 2013 Julian Gruber <julian@juliangruber.com>
+
+Permission is hereby granted, free of charge, to any person obtaining a copy of
+this software and associated documentation files (the "Software"), to deal in
+the Software without restriction, including without limitation the rights to
+use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+of the Software, and to permit persons to whom the Software is furnished to do
+so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/node_modules/brace-expansion/index.js b/node_modules/brace-expansion/index.js
new file mode 100644
index 0000000..4af9dde
--- /dev/null
+++ b/node_modules/brace-expansion/index.js
@@ -0,0 +1,203 @@
+var balanced = require('balanced-match');
+
+module.exports = expandTop;
+
+var escSlash = '\0SLASH'+Math.random()+'\0';
+var escOpen = '\0OPEN'+Math.random()+'\0';
+var escClose = '\0CLOSE'+Math.random()+'\0';
+var escComma = '\0COMMA'+Math.random()+'\0';
+var escPeriod = '\0PERIOD'+Math.random()+'\0';
+
+function numeric(str) {
+ return parseInt(str, 10) == str
+ ? parseInt(str, 10)
+ : str.charCodeAt(0);
+}
+
+function escapeBraces(str) {
+ return str.split('\\\\').join(escSlash)
+ .split('\\{').join(escOpen)
+ .split('\\}').join(escClose)
+ .split('\\,').join(escComma)
+ .split('\\.').join(escPeriod);
+}
+
+function unescapeBraces(str) {
+ return str.split(escSlash).join('\\')
+ .split(escOpen).join('{')
+ .split(escClose).join('}')
+ .split(escComma).join(',')
+ .split(escPeriod).join('.');
+}
+
+
+// Basically just str.split(","), but handling cases
+// where we have nested braced sections, which should be
+// treated as individual members, like {a,{b,c},d}
+function parseCommaParts(str) {
+ if (!str)
+ return [''];
+
+ var parts = [];
+ var m = balanced('{', '}', str);
+
+ if (!m)
+ return str.split(',');
+
+ var pre = m.pre;
+ var body = m.body;
+ var post = m.post;
+ var p = pre.split(',');
+
+ p[p.length-1] += '{' + body + '}';
+ var postParts = parseCommaParts(post);
+ if (post.length) {
+ p[p.length-1] += postParts.shift();
+ p.push.apply(p, postParts);
+ }
+
+ parts.push.apply(parts, p);
+
+ return parts;
+}
+
+function expandTop(str) {
+ if (!str)
+ return [];
+
+ // I don't know why Bash 4.3 does this, but it does.
+ // Anything starting with {} will have the first two bytes preserved
+ // but *only* at the top level, so {},a}b will not expand to anything,
+ // but a{},b}c will be expanded to [a}c,abc].
+ // One could argue that this is a bug in Bash, but since the goal of
+ // this module is to match Bash's rules, we escape a leading {}
+ if (str.substr(0, 2) === '{}') {
+ str = '\\{\\}' + str.substr(2);
+ }
+
+ return expand(escapeBraces(str), true).map(unescapeBraces);
+}
+
+function embrace(str) {
+ return '{' + str + '}';
+}
+function isPadded(el) {
+ return /^-?0\d/.test(el);
+}
+
+function lte(i, y) {
+ return i <= y;
+}
+function gte(i, y) {
+ return i >= y;
+}
+
+function expand(str, isTop) {
+ var expansions = [];
+
+ var m = balanced('{', '}', str);
+ if (!m) return [str];
+
+ // no need to expand pre, since it is guaranteed to be free of brace-sets
+ var pre = m.pre;
+ var post = m.post.length
+ ? expand(m.post, false)
+ : [''];
+
+ if (/\$$/.test(m.pre)) {
+ for (var k = 0; k < post.length; k++) {
+ var expansion = pre+ '{' + m.body + '}' + post[k];
+ expansions.push(expansion);
+ }
+ } else {
+ var isNumericSequence = /^-?\d+\.\.-?\d+(?:\.\.-?\d+)?$/.test(m.body);
+ var isAlphaSequence = /^[a-zA-Z]\.\.[a-zA-Z](?:\.\.-?\d+)?$/.test(m.body);
+ var isSequence = isNumericSequence || isAlphaSequence;
+ var isOptions = m.body.indexOf(',') >= 0;
+ if (!isSequence && !isOptions) {
+ // {a},b}
+ if (m.post.match(/,.*\}/)) {
+ str = m.pre + '{' + m.body + escClose + m.post;
+ return expand(str);
+ }
+ return [str];
+ }
+
+ var n;
+ if (isSequence) {
+ n = m.body.split(/\.\./);
+ } else {
+ n = parseCommaParts(m.body);
+ if (n.length === 1) {
+ // x{{a,b}}y ==> x{a}y x{b}y
+ n = expand(n[0], false).map(embrace);
+ if (n.length === 1) {
+ return post.map(function(p) {
+ return m.pre + n[0] + p;
+ });
+ }
+ }
+ }
+
+ // at this point, n is the parts, and we know it's not a comma set
+ // with a single entry.
+ var N;
+
+ if (isSequence) {
+ var x = numeric(n[0]);
+ var y = numeric(n[1]);
+ var width = Math.max(n[0].length, n[1].length)
+ var incr = n.length == 3
+ ? Math.abs(numeric(n[2]))
+ : 1;
+ var test = lte;
+ var reverse = y < x;
+ if (reverse) {
+ incr *= -1;
+ test = gte;
+ }
+ var pad = n.some(isPadded);
+
+ N = [];
+
+ for (var i = x; test(i, y); i += incr) {
+ var c;
+ if (isAlphaSequence) {
+ c = String.fromCharCode(i);
+ if (c === '\\')
+ c = '';
+ } else {
+ c = String(i);
+ if (pad) {
+ var need = width - c.length;
+ if (need > 0) {
+ var z = new Array(need + 1).join('0');
+ if (i < 0)
+ c = '-' + z + c.slice(1);
+ else
+ c = z + c;
+ }
+ }
+ }
+ N.push(c);
+ }
+ } else {
+ N = [];
+
+ for (var j = 0; j < n.length; j++) {
+ N.push.apply(N, expand(n[j], false));
+ }
+ }
+
+ for (var j = 0; j < N.length; j++) {
+ for (var k = 0; k < post.length; k++) {
+ var expansion = pre + N[j] + post[k];
+ if (!isTop || isSequence || expansion)
+ expansions.push(expansion);
+ }
+ }
+ }
+
+ return expansions;
+}
+
diff --git a/node_modules/brace-expansion/package.json b/node_modules/brace-expansion/package.json
new file mode 100644
index 0000000..7097d41
--- /dev/null
+++ b/node_modules/brace-expansion/package.json
@@ -0,0 +1,46 @@
+{
+ "name": "brace-expansion",
+ "description": "Brace expansion as known from sh/bash",
+ "version": "2.0.1",
+ "repository": {
+ "type": "git",
+ "url": "git://github.com/juliangruber/brace-expansion.git"
+ },
+ "homepage": "https://github.com/juliangruber/brace-expansion",
+ "main": "index.js",
+ "scripts": {
+ "test": "tape test/*.js",
+ "gentest": "bash test/generate.sh",
+ "bench": "matcha test/perf/bench.js"
+ },
+ "dependencies": {
+ "balanced-match": "^1.0.0"
+ },
+ "devDependencies": {
+ "@c4312/matcha": "^1.3.1",
+ "tape": "^4.6.0"
+ },
+ "keywords": [],
+ "author": {
+ "name": "Julian Gruber",
+ "email": "mail@juliangruber.com",
+ "url": "http://juliangruber.com"
+ },
+ "license": "MIT",
+ "testling": {
+ "files": "test/*.js",
+ "browsers": [
+ "ie/8..latest",
+ "firefox/20..latest",
+ "firefox/nightly",
+ "chrome/25..latest",
+ "chrome/canary",
+ "opera/12..latest",
+ "opera/next",
+ "safari/5.1..latest",
+ "ipad/6.0..latest",
+ "iphone/6.0..latest",
+ "android-browser/4.2..latest"
+ ]
+ }
+}
diff --git a/node_modules/entities/LICENSE b/node_modules/entities/LICENSE
new file mode 100644
index 0000000..c464f86
--- /dev/null
+++ b/node_modules/entities/LICENSE
@@ -0,0 +1,11 @@
+Copyright (c) Felix Böhm
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
+
+Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
+
+Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
+
+THIS IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS,
+EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/node_modules/entities/package.json b/node_modules/entities/package.json
new file mode 100644
index 0000000..2e857f8
--- /dev/null
+++ b/node_modules/entities/package.json
@@ -0,0 +1,90 @@
+{
+ "name": "entities",
+ "version": "4.5.0",
+ "description": "Encode & decode XML and HTML entities with ease & speed",
+ "author": "Felix Boehm ",
+ "funding": "https://github.com/fb55/entities?sponsor=1",
+ "sideEffects": false,
+ "keywords": [
+ "entity",
+ "decoding",
+ "encoding",
+ "html",
+ "xml",
+ "html entities"
+ ],
+ "directories": {
+ "lib": "lib/"
+ },
+ "main": "lib/index.js",
+ "types": "lib/index.d.ts",
+ "module": "lib/esm/index.js",
+ "exports": {
+ ".": {
+ "require": "./lib/index.js",
+ "import": "./lib/esm/index.js"
+ },
+ "./lib/decode.js": {
+ "require": "./lib/decode.js",
+ "import": "./lib/esm/decode.js"
+ },
+ "./lib/escape.js": {
+ "require": "./lib/escape.js",
+ "import": "./lib/esm/escape.js"
+ }
+ },
+ "files": [
+ "lib/**/*"
+ ],
+ "engines": {
+ "node": ">=0.12"
+ },
+ "devDependencies": {
+ "@types/jest": "^28.1.8",
+ "@types/node": "^18.15.11",
+ "@typescript-eslint/eslint-plugin": "^5.58.0",
+ "@typescript-eslint/parser": "^5.58.0",
+ "eslint": "^8.38.0",
+ "eslint-config-prettier": "^8.8.0",
+ "eslint-plugin-node": "^11.1.0",
+ "jest": "^28.1.3",
+ "prettier": "^2.8.7",
+ "ts-jest": "^28.0.8",
+ "typedoc": "^0.24.1",
+ "typescript": "^5.0.4"
+ },
+ "scripts": {
+ "test": "npm run test:jest && npm run lint",
+ "test:jest": "jest",
+ "lint": "npm run lint:es && npm run lint:prettier",
+ "lint:es": "eslint .",
+ "lint:prettier": "npm run prettier -- --check",
+ "format": "npm run format:es && npm run format:prettier",
+ "format:es": "npm run lint:es -- --fix",
+ "format:prettier": "npm run prettier -- --write",
+ "prettier": "prettier '**/*.{ts,md,json,yml}'",
+ "build": "npm run build:cjs && npm run build:esm",
+ "build:cjs": "tsc --sourceRoot https://raw.githubusercontent.com/fb55/entities/$(git rev-parse HEAD)/src/",
+ "build:esm": "npm run build:cjs -- --module esnext --target es2019 --outDir lib/esm && echo '{\"type\":\"module\"}' > lib/esm/package.json",
+ "build:docs": "typedoc --hideGenerator src/index.ts",
+ "build:trie": "ts-node scripts/write-decode-map.ts",
+ "build:encode-trie": "ts-node scripts/write-encode-map.ts",
+ "prepare": "npm run build"
+ },
+ "repository": {
+ "type": "git",
+ "url": "git://github.com/fb55/entities.git"
+ },
+ "license": "BSD-2-Clause",
+ "jest": {
+ "preset": "ts-jest",
+ "coverageProvider": "v8",
+ "moduleNameMapper": {
+ "^(.*)\\.js$": "$1"
+ }
+ },
+ "prettier": {
+ "tabWidth": 4,
+ "proseWrap": "always"
+ }
+}
diff --git a/node_modules/entities/readme.md b/node_modules/entities/readme.md
new file mode 100644
index 0000000..731d90c
--- /dev/null
+++ b/node_modules/entities/readme.md
@@ -0,0 +1,122 @@
+# entities [npm version](https://npmjs.org/package/entities) [downloads](https://npmjs.org/package/entities) [CI status](https://github.com/fb55/entities/actions/workflows/nodejs-test.yml)
+
+Encode & decode HTML & XML entities with ease & speed.
+
+## Features
+
+- 😇 Tried and true: `entities` is used by many popular libraries; eg.
+ [`htmlparser2`](https://github.com/fb55/htmlparser2), the official
+ [AWS SDK](https://github.com/aws/aws-sdk-js-v3) and
+ [`commonmark`](https://github.com/commonmark/commonmark.js) use it to
+ process HTML entities.
+- ⚡️ Fast: `entities` is the fastest library for decoding HTML entities (as
+ of April 2022); see [performance](#performance).
+- 🎛 Configurable: Get an output tailored for your needs. You are fine with
+ UTF8? That'll save you some bytes. Prefer to only have ASCII characters? We
+ can do that as well!
+
+## How to…
+
+### …install `entities`
+
+ npm install entities
+
+### …use `entities`
+
+```javascript
+const entities = require("entities");
+
+// Encoding
+entities.escapeUTF8("& ü"); // "&amp; ü"
+entities.encodeXML("& ü"); // "&amp; &#xfc;"
+entities.encodeHTML("& ü"); // "&amp; &uuml;"
+
+// Decoding
+entities.decodeXML("asdf &amp; &yuml; &uuml; &apos;"); // "asdf & &yuml; &uuml; '"
+entities.decodeHTML("asdf &amp; &yuml; &uuml; &apos;"); // "asdf & ÿ ü '"
+```
+
+## Performance
+
+This is how `entities` compares to other libraries on a very basic benchmark
+(see `scripts/benchmark.ts`, for 10,000,000 iterations; **lower is better**):
+
+| Library | Version | `decode` perf | `encode` perf | `escape` perf |
+| -------------- | ------- | ------------- | ------------- | ------------- |
+| entities | `3.0.1` | 1.418s | 6.786s | 2.196s |
+| html-entities | `2.3.2` | 2.530s | 6.829s | 2.415s |
+| he | `1.2.0` | 5.800s | 24.237s | 3.624s |
+| parse-entities | `3.0.0` | 9.660s | N/A | N/A |
+
+---
+
+## FAQ
+
+> What methods should I actually use to encode my documents?
+
+If your target supports UTF-8, the `escapeUTF8` method is going to be your best
+choice. Otherwise, use either `encodeHTML` or `encodeXML` based on whether
+you're dealing with an HTML or an XML document.
+
+You can have a look at the options for the `encode` and `decode` methods to see
+everything you can configure.
+
+> When should I use strict decoding?
+
+With strict decoding, entities that are not terminated with a semicolon are ignored.
+This is helpful for decoding entities in legacy environments.
+
+> Why should I use `entities` instead of alternative modules?
+
+As of April 2022, `entities` is a bit faster than other modules. Still, this is
+not a very differentiated space and other modules can catch up.
+
+**More importantly**, you might already have `entities` in your dependency graph
+(as a dependency of eg. `cheerio`, or `htmlparser2`), and including it directly
+might not even increase your bundle size. The same is true for other entity
+libraries, so have a look through your `node_modules` directory!
+
+> Does `entities` support tree shaking?
+
+Yes! `entities` ships as both a CommonJS and an ES module. Note that for best
+results, you should not use the `encode` and `decode` functions, as they wrap
+around a number of other functions, all of which will remain in the bundle.
+Instead, use the functions that you need directly.
+
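+As a rough sketch of that advice (assuming a bundler that resolves the ESM build described above; the sample strings are illustrative), import only the helpers you need:
+
+```javascript
+// named imports instead of the catch-all `encode`/`decode` wrappers
+import { escapeUTF8, decodeHTML } from "entities";
+
+escapeUTF8("& ü"); // "&amp; ü"
+decodeHTML("asdf &amp; &uuml;"); // "asdf & ü"
+```
+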
+---
+
+## Acknowledgements
+
+This library wouldn't be possible without the work of these individuals. Thanks
+to
+
+- [@mathiasbynens](https://github.com/mathiasbynens) for his explanations
+ about character encodings, and his library `he`, which was one of the
+ inspirations for `entities`
+- [@inikulin](https://github.com/inikulin) for his work on optimized tries for
+ decoding HTML entities for the `parse5` project
+- [@mdevils](https://github.com/mdevils) for taking on the challenge of
+ producing a quick entity library with his `html-entities` library.
+ `entities` would be quite a bit slower if there wasn't any competition.
+ Right now `entities` is on top, but we'll see how long that lasts!
+
+---
+
+License: BSD-2-Clause
+
+## Security contact information
+
+To report a security vulnerability, please use the
+[Tidelift security contact](https://tidelift.com/security). Tidelift will
+coordinate the fix and disclosure.
+
+## `entities` for enterprise
+
+Available as part of the Tidelift Subscription
+
+The maintainers of `entities` and thousands of other packages are working with
+Tidelift to deliver commercial support and maintenance for the open source
+dependencies you use to build your applications. Save time, reduce risk, and
+improve code health, while paying the maintainers of the exact dependencies you
+use.
+[Learn more.](https://tidelift.com/subscription/pkg/npm-entities?utm_source=npm-entities&utm_medium=referral&utm_campaign=enterprise&utm_term=repo)
diff --git a/node_modules/linkify-it/LICENSE b/node_modules/linkify-it/LICENSE
new file mode 100644
index 0000000..67596f5
--- /dev/null
+++ b/node_modules/linkify-it/LICENSE
@@ -0,0 +1,22 @@
+Copyright (c) 2015 Vitaly Puzrin.
+
+Permission is hereby granted, free of charge, to any person
+obtaining a copy of this software and associated documentation
+files (the "Software"), to deal in the Software without
+restriction, including without limitation the rights to use,
+copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the
+Software is furnished to do so, subject to the following
+conditions:
+
+The above copyright notice and this permission notice shall be
+included in all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+OTHER DEALINGS IN THE SOFTWARE.
diff --git a/node_modules/linkify-it/README.md b/node_modules/linkify-it/README.md
new file mode 100644
index 0000000..7c2d9fb
--- /dev/null
+++ b/node_modules/linkify-it/README.md
@@ -0,0 +1,196 @@
+linkify-it
+==========
+
+[CI](https://github.com/markdown-it/linkify-it/actions/workflows/ci.yml)
+[NPM version](https://www.npmjs.org/package/linkify-it)
+[Coverage Status](https://coveralls.io/r/markdown-it/linkify-it?branch=master)
+[Gitter](https://gitter.im/markdown-it/linkify-it)
+
+> Links recognition library with FULL unicode support.
+> Focused on high quality link patterns detection in plain text.
+
+__[Demo](http://markdown-it.github.io/linkify-it/)__
+
+Why it's awesome:
+
+- Full unicode support, _with astral characters_!
+- International domains support.
+- Allows rules extension & custom normalizers.
+
+
+Install
+-------
+
+```bash
+npm install linkify-it --save
+```
+
+Browserification is also supported.
+
+
+Usage examples
+--------------
+
+##### Example 1
+
+```js
+import linkifyit from 'linkify-it';
+const linkify = linkifyit();
+
+// Reload full tlds list & add unofficial `.onion` domain.
+linkify
+ .tlds(require('tlds')) // Reload with full tlds list
+ .tlds('onion', true) // Add unofficial `.onion` domain
+ .add('git:', 'http:') // Add `git:` protocol as "alias"
+ .add('ftp:', null) // Disable `ftp:` protocol
+ .set({ fuzzyIP: true }); // Enable IPs in fuzzy links (without schema)
+
+console.log(linkify.test('Site github.com!')); // true
+
+console.log(linkify.match('Site github.com!')); // [ {
+ // schema: "",
+ // index: 5,
+ // lastIndex: 15,
+ // raw: "github.com",
+ // text: "github.com",
+ // url: "http://github.com",
+ // } ]
+```
+
+##### Example 2. Add twitter mentions handler
+
+```js
+linkify.add('@', {
+ validate: function (text, pos, self) {
+ const tail = text.slice(pos);
+
+ if (!self.re.twitter) {
+ self.re.twitter = new RegExp(
+ '^([a-zA-Z0-9_]){1,15}(?!_)(?=$|' + self.re.src_ZPCc + ')'
+ );
+ }
+ if (self.re.twitter.test(tail)) {
+ // Linkifier allows punctuation chars before prefix,
+ // but we additionally disable `@` ("@@mention" is invalid)
+      if (pos >= 2 && text[pos - 2] === '@') {
+ return false;
+ }
+ return tail.match(self.re.twitter)[0].length;
+ }
+ return 0;
+ },
+ normalize: function (match) {
+ match.url = 'https://twitter.com/' + match.url.replace(/^@/, '');
+ }
+});
+```
+
+
+API
+---
+
+__[API documentation](http://markdown-it.github.io/linkify-it/doc)__
+
+### new LinkifyIt(schemas, options)
+
+Creates new linkifier instance with optional additional schemas.
+Can be called without `new` keyword for convenience.
+
+By default understands:
+
+- `http(s)://...` , `ftp://...`, `mailto:...` & `//...` links
+- "fuzzy" links and emails (google.com, foo@bar.com).
+
+`schemas` is an object, where each key/value describes protocol/rule:
+
+- __key__ - link prefix (usually, protocol name with `:` at the end, `skype:`
+ for example). `linkify-it` makes sure that prefix is not preceded with
+ alphanumeric char.
+- __value__ - rule to check tail after link prefix
+ - _String_ - just alias to existing rule
+ - _Object_
+ - _validate_ - either a `RegExp` (start with `^`, and don't include the
+ link prefix itself), or a validator function which, given arguments
+ _text_, _pos_, and _self_, returns the length of a match in _text_
+ starting at index _pos_. _pos_ is the index right after the link prefix.
+ _self_ can be used to access the linkify object to cache data.
+ - _normalize_ - optional function to normalize text & url of matched result
+ (for example, for twitter mentions).
+
+`options` (a short usage sketch follows this list):
+
+- __fuzzyLink__ - recognize URL-s without `http(s)://` head. Default `true`.
+- __fuzzyIP__ - allow IPs in fuzzy links above. Can conflict with some texts
+ like version numbers. Default `false`.
+- __fuzzyEmail__ - recognize emails without `mailto:` prefix. Default `true`.
+- __---__ - set `true` to terminate link with `---` (if it should be treated as a long dash).
+
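+For example, a minimal sketch of passing options while keeping the default schemas (the `null` first argument and the sample strings are only illustrative):
+
+```js
+import LinkifyIt from 'linkify-it';
+
+// keep the default schemas, but turn off bare e-mail detection
+const linkifier = new LinkifyIt(null, { fuzzyEmail: false });
+
+linkifier.test('write to user@example.com'); // false, fuzzy e-mails are disabled
+linkifier.test('docs: https://example.com'); // true, schema links still match
+```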
+
+### .test(text)
+
+Searches linkifiable pattern and returns `true` on success or `false` on fail.
+
+
+### .pretest(text)
+
+Quick check of whether a link could possibly exist in the text. Can be used to
+optimize more expensive `.test()` calls. Returns `false` if no link can be
+found, and `true` if a `.test()` call is needed to know for sure.
+
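+A small usage sketch, reusing the `linkify` instance from the examples above (the sample strings are illustrative):
+
+```js
+const texts = ['no links in this one', 'docs live at https://example.com'];
+
+for (const t of texts) {
+  if (!linkify.pretest(t)) continue; // cheap reject, no full scan needed
+  console.log(linkify.match(t));     // run the expensive scan only when pretest passes
+}
+```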
+
+### .testSchemaAt(text, name, offset)
+
+Similar to `.test()` but checks only specific protocol tail exactly at given
+position. Returns length of found pattern (0 on fail).
+
+
+### .match(text)
+
+Returns `Array` of found link matches or null if nothing found.
+
+Each match has:
+
+- __schema__ - link schema, can be empty for fuzzy links, or `//` for
+ protocol-neutral links.
+- __index__ - offset of matched text
+- __lastIndex__ - index of next char after match end
+- __raw__ - matched text
+- __text__ - normalized text
+- __url__ - link, generated from matched text
+
+
+### .matchAtStart(text)
+
+Checks if a match exists at the start of the string. Returns `Match`
+(see docs for `match(text)`) or null if no URL is at the start.
+Doesn't work with fuzzy links.
+
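+For instance (a small sketch reusing the `linkify` instance from above; the strings are illustrative):
+
+```js
+linkify.matchAtStart('https://example.com is live'); // Match object for the leading URL
+linkify.matchAtStart('see https://example.com');     // null, the URL is not at position 0
+```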
+
+### .tlds(list[, keepOld])
+
+Load (or merge) new tlds list. Those are needed for fuzzy links (without schema)
+to avoid false positives. By default:
+
+- 2-letter root zones are ok.
+- biz|com|edu|gov|net|org|pro|web|xxx|aero|asia|coop|info|museum|name|shop|рф are ok.
+- encoded (`xn--...`) root zones are ok.
+
+If that's not enough, you can reload the defaults with a more detailed zones list.
+
+### .add(key, value)
+
+Add a new schema to the schemas object. As described in the constructor
+definition, `key` is a link prefix (`skype:`, for example), and `value`
+is a String to alias to another schema, or an Object with `validate` and
+optionally `normalize` definitions. To disable an existing rule, use
+`.add(key, null)`.
+
+
+### .set(options)
+
+Override default options. Properties that are not supplied remain unchanged.
+
+
+## License
+
+[MIT](https://github.com/markdown-it/linkify-it/blob/master/LICENSE)
diff --git a/node_modules/linkify-it/index.mjs b/node_modules/linkify-it/index.mjs
new file mode 100644
index 0000000..f4c8e13
--- /dev/null
+++ b/node_modules/linkify-it/index.mjs
@@ -0,0 +1,642 @@
+import reFactory from './lib/re.mjs'
+
+//
+// Helpers
+//
+
+// Merge objects
+//
+function assign (obj /* from1, from2, from3, ... */) {
+ const sources = Array.prototype.slice.call(arguments, 1)
+
+ sources.forEach(function (source) {
+ if (!source) { return }
+
+ Object.keys(source).forEach(function (key) {
+ obj[key] = source[key]
+ })
+ })
+
+ return obj
+}
+
+function _class (obj) { return Object.prototype.toString.call(obj) }
+function isString (obj) { return _class(obj) === '[object String]' }
+function isObject (obj) { return _class(obj) === '[object Object]' }
+function isRegExp (obj) { return _class(obj) === '[object RegExp]' }
+function isFunction (obj) { return _class(obj) === '[object Function]' }
+
+function escapeRE (str) { return str.replace(/[.?*+^$[\]\\(){}|-]/g, '\\$&') }
+
+//
+
+const defaultOptions = {
+ fuzzyLink: true,
+ fuzzyEmail: true,
+ fuzzyIP: false
+}
+
+function isOptionsObj (obj) {
+ return Object.keys(obj || {}).reduce(function (acc, k) {
+ /* eslint-disable-next-line no-prototype-builtins */
+ return acc || defaultOptions.hasOwnProperty(k)
+ }, false)
+}
+
+const defaultSchemas = {
+ 'http:': {
+ validate: function (text, pos, self) {
+ const tail = text.slice(pos)
+
+ if (!self.re.http) {
+ // compile lazily, because "host"-containing variables can change on tlds update.
+ self.re.http = new RegExp(
+ '^\\/\\/' + self.re.src_auth + self.re.src_host_port_strict + self.re.src_path, 'i'
+ )
+ }
+ if (self.re.http.test(tail)) {
+ return tail.match(self.re.http)[0].length
+ }
+ return 0
+ }
+ },
+ 'https:': 'http:',
+ 'ftp:': 'http:',
+ '//': {
+ validate: function (text, pos, self) {
+ const tail = text.slice(pos)
+
+ if (!self.re.no_http) {
+ // compile lazily, because "host"-containing variables can change on tlds update.
+ self.re.no_http = new RegExp(
+ '^' +
+ self.re.src_auth +
+ // Don't allow single-level domains, because of false positives like '//test'
+ // with code comments
+ '(?:localhost|(?:(?:' + self.re.src_domain + ')\\.)+' + self.re.src_domain_root + ')' +
+ self.re.src_port +
+ self.re.src_host_terminator +
+ self.re.src_path,
+
+ 'i'
+ )
+ }
+
+ if (self.re.no_http.test(tail)) {
+ // should not be `://` & `///`, that protects from errors in protocol name
+ if (pos >= 3 && text[pos - 3] === ':') { return 0 }
+ if (pos >= 3 && text[pos - 3] === '/') { return 0 }
+ return tail.match(self.re.no_http)[0].length
+ }
+ return 0
+ }
+ },
+ 'mailto:': {
+ validate: function (text, pos, self) {
+ const tail = text.slice(pos)
+
+ if (!self.re.mailto) {
+ self.re.mailto = new RegExp(
+ '^' + self.re.src_email_name + '@' + self.re.src_host_strict, 'i'
+ )
+ }
+ if (self.re.mailto.test(tail)) {
+ return tail.match(self.re.mailto)[0].length
+ }
+ return 0
+ }
+ }
+}
+
+// RE pattern for 2-character tlds (autogenerated by ./support/tlds_2char_gen.js)
+/* eslint-disable-next-line max-len */
+const tlds_2ch_src_re = 'a[cdefgilmnoqrstuwxz]|b[abdefghijmnorstvwyz]|c[acdfghiklmnoruvwxyz]|d[ejkmoz]|e[cegrstu]|f[ijkmor]|g[abdefghilmnpqrstuwy]|h[kmnrtu]|i[delmnoqrst]|j[emop]|k[eghimnprwyz]|l[abcikrstuvy]|m[acdeghklmnopqrstuvwxyz]|n[acefgilopruz]|om|p[aefghklmnrstwy]|qa|r[eosuw]|s[abcdeghijklmnortuvxyz]|t[cdfghjklmnortvwz]|u[agksyz]|v[aceginu]|w[fs]|y[et]|z[amw]'
+
+// DON'T try to make PRs with changes. Extend TLDs with LinkifyIt.tlds() instead
+const tlds_default = 'biz|com|edu|gov|net|org|pro|web|xxx|aero|asia|coop|info|museum|name|shop|рф'.split('|')
+
+function resetScanCache (self) {
+ self.__index__ = -1
+ self.__text_cache__ = ''
+}
+
+function createValidator (re) {
+ return function (text, pos) {
+ const tail = text.slice(pos)
+
+ if (re.test(tail)) {
+ return tail.match(re)[0].length
+ }
+ return 0
+ }
+}
+
+function createNormalizer () {
+ return function (match, self) {
+ self.normalize(match)
+ }
+}
+
+// Schemas compiler. Build regexps.
+//
+function compile (self) {
+ // Load & clone RE patterns.
+ const re = self.re = reFactory(self.__opts__)
+
+ // Define dynamic patterns
+ const tlds = self.__tlds__.slice()
+
+ self.onCompile()
+
+ if (!self.__tlds_replaced__) {
+ tlds.push(tlds_2ch_src_re)
+ }
+ tlds.push(re.src_xn)
+
+ re.src_tlds = tlds.join('|')
+
+ function untpl (tpl) { return tpl.replace('%TLDS%', re.src_tlds) }
+
+ re.email_fuzzy = RegExp(untpl(re.tpl_email_fuzzy), 'i')
+ re.link_fuzzy = RegExp(untpl(re.tpl_link_fuzzy), 'i')
+ re.link_no_ip_fuzzy = RegExp(untpl(re.tpl_link_no_ip_fuzzy), 'i')
+ re.host_fuzzy_test = RegExp(untpl(re.tpl_host_fuzzy_test), 'i')
+
+ //
+ // Compile each schema
+ //
+
+ const aliases = []
+
+ self.__compiled__ = {} // Reset compiled data
+
+ function schemaError (name, val) {
+ throw new Error('(LinkifyIt) Invalid schema "' + name + '": ' + val)
+ }
+
+ Object.keys(self.__schemas__).forEach(function (name) {
+ const val = self.__schemas__[name]
+
+ // skip disabled methods
+ if (val === null) { return }
+
+ const compiled = { validate: null, link: null }
+
+ self.__compiled__[name] = compiled
+
+ if (isObject(val)) {
+ if (isRegExp(val.validate)) {
+ compiled.validate = createValidator(val.validate)
+ } else if (isFunction(val.validate)) {
+ compiled.validate = val.validate
+ } else {
+ schemaError(name, val)
+ }
+
+ if (isFunction(val.normalize)) {
+ compiled.normalize = val.normalize
+ } else if (!val.normalize) {
+ compiled.normalize = createNormalizer()
+ } else {
+ schemaError(name, val)
+ }
+
+ return
+ }
+
+ if (isString(val)) {
+ aliases.push(name)
+ return
+ }
+
+ schemaError(name, val)
+ })
+
+ //
+ // Compile postponed aliases
+ //
+
+ aliases.forEach(function (alias) {
+ if (!self.__compiled__[self.__schemas__[alias]]) {
+      // Silently fail on missing schemas to avoid errors on disable.
+ // schemaError(alias, self.__schemas__[alias]);
+ return
+ }
+
+ self.__compiled__[alias].validate =
+ self.__compiled__[self.__schemas__[alias]].validate
+ self.__compiled__[alias].normalize =
+ self.__compiled__[self.__schemas__[alias]].normalize
+ })
+
+ //
+ // Fake record for guessed links
+ //
+ self.__compiled__[''] = { validate: null, normalize: createNormalizer() }
+
+ //
+ // Build schema condition
+ //
+ const slist = Object.keys(self.__compiled__)
+ .filter(function (name) {
+ // Filter disabled & fake schemas
+ return name.length > 0 && self.__compiled__[name]
+ })
+ .map(escapeRE)
+ .join('|')
+ // (?!_) cause 1.5x slowdown
+ self.re.schema_test = RegExp('(^|(?!_)(?:[><\uff5c]|' + re.src_ZPCc + '))(' + slist + ')', 'i')
+ self.re.schema_search = RegExp('(^|(?!_)(?:[><\uff5c]|' + re.src_ZPCc + '))(' + slist + ')', 'ig')
+ self.re.schema_at_start = RegExp('^' + self.re.schema_search.source, 'i')
+
+ self.re.pretest = RegExp(
+ '(' + self.re.schema_test.source + ')|(' + self.re.host_fuzzy_test.source + ')|@',
+ 'i'
+ )
+
+ //
+ // Cleanup
+ //
+
+ resetScanCache(self)
+}
+
+/**
+ * class Match
+ *
+ * Match result. Single element of array, returned by [[LinkifyIt#match]]
+ **/
+function Match (self, shift) {
+ const start = self.__index__
+ const end = self.__last_index__
+ const text = self.__text_cache__.slice(start, end)
+
+ /**
+ * Match#schema -> String
+ *
+ * Prefix (protocol) for matched string.
+ **/
+ this.schema = self.__schema__.toLowerCase()
+ /**
+ * Match#index -> Number
+ *
+ * First position of matched string.
+ **/
+ this.index = start + shift
+ /**
+ * Match#lastIndex -> Number
+ *
+ * Next position after matched string.
+ **/
+ this.lastIndex = end + shift
+ /**
+ * Match#raw -> String
+ *
+ * Matched string.
+ **/
+ this.raw = text
+ /**
+ * Match#text -> String
+ *
+   * Normalized text of matched string.
+ **/
+ this.text = text
+ /**
+ * Match#url -> String
+ *
+ * Normalized url of matched string.
+ **/
+ this.url = text
+}
+
+function createMatch (self, shift) {
+ const match = new Match(self, shift)
+
+ self.__compiled__[match.schema].normalize(match, self)
+
+ return match
+}
+
+/**
+ * class LinkifyIt
+ **/
+
+/**
+ * new LinkifyIt(schemas, options)
+ * - schemas (Object): Optional. Additional schemas to validate (prefix/validator)
+ * - options (Object): { fuzzyLink|fuzzyEmail|fuzzyIP: true|false }
+ *
+ * Creates new linkifier instance with optional additional schemas.
+ * Can be called without `new` keyword for convenience.
+ *
+ * By default understands:
+ *
+ * - `http(s)://...` , `ftp://...`, `mailto:...` & `//...` links
+ * - "fuzzy" links and emails (example.com, foo@bar.com).
+ *
+ * `schemas` is an object, where each key/value describes protocol/rule:
+ *
+ * - __key__ - link prefix (usually, protocol name with `:` at the end, `skype:`
+ *   for example). `linkify-it` makes sure that prefix is not preceded by an
+ *   alphanumeric char or symbol. Only whitespace and punctuation are allowed.
+ * - __value__ - rule to check tail after link prefix
+ * - _String_ - just alias to existing rule
+ * - _Object_
+ * - _validate_ - validator function (should return matched length on success),
+ * or `RegExp`.
+ * - _normalize_ - optional function to normalize text & url of matched result
+ * (for example, for @twitter mentions).
+ *
+ * `options`:
+ *
+ * - __fuzzyLink__ - recognize URL-s without `http(s):` prefix. Default `true`.
+ * - __fuzzyIP__ - allow IPs in fuzzy links above. Can conflict with some texts
+ * like version numbers. Default `false`.
+ * - __fuzzyEmail__ - recognize emails without `mailto:` prefix.
+ *
+ **/
+function LinkifyIt (schemas, options) {
+ if (!(this instanceof LinkifyIt)) {
+ return new LinkifyIt(schemas, options)
+ }
+
+ if (!options) {
+ if (isOptionsObj(schemas)) {
+ options = schemas
+ schemas = {}
+ }
+ }
+
+ this.__opts__ = assign({}, defaultOptions, options)
+
+ // Cache last tested result. Used to skip repeating steps on next `match` call.
+ this.__index__ = -1
+ this.__last_index__ = -1 // Next scan position
+ this.__schema__ = ''
+ this.__text_cache__ = ''
+
+ this.__schemas__ = assign({}, defaultSchemas, schemas)
+ this.__compiled__ = {}
+
+ this.__tlds__ = tlds_default
+ this.__tlds_replaced__ = false
+
+ this.re = {}
+
+ compile(this)
+}
+
+/** chainable
+ * LinkifyIt#add(schema, definition)
+ * - schema (String): rule name (fixed pattern prefix)
+ * - definition (String|RegExp|Object): schema definition
+ *
+ * Add new rule definition. See constructor description for details.
+ **/
+LinkifyIt.prototype.add = function add (schema, definition) {
+ this.__schemas__[schema] = definition
+ compile(this)
+ return this
+}
+
+/** chainable
+ * LinkifyIt#set(options)
+ * - options (Object): { fuzzyLink|fuzzyEmail|fuzzyIP: true|false }
+ *
+ * Set recognition options for links without schema.
+ **/
+LinkifyIt.prototype.set = function set (options) {
+ this.__opts__ = assign(this.__opts__, options)
+ return this
+}
+
+/**
+ * LinkifyIt#test(text) -> Boolean
+ *
+ * Searches linkifiable pattern and returns `true` on success or `false` on fail.
+ **/
+LinkifyIt.prototype.test = function test (text) {
+ // Reset scan cache
+ this.__text_cache__ = text
+ this.__index__ = -1
+
+ if (!text.length) { return false }
+
+ let m, ml, me, len, shift, next, re, tld_pos, at_pos
+
+ // try to scan for link with schema - that's the most simple rule
+ if (this.re.schema_test.test(text)) {
+ re = this.re.schema_search
+ re.lastIndex = 0
+ while ((m = re.exec(text)) !== null) {
+ len = this.testSchemaAt(text, m[2], re.lastIndex)
+ if (len) {
+ this.__schema__ = m[2]
+ this.__index__ = m.index + m[1].length
+ this.__last_index__ = m.index + m[0].length + len
+ break
+ }
+ }
+ }
+
+ if (this.__opts__.fuzzyLink && this.__compiled__['http:']) {
+ // guess schemaless links
+ tld_pos = text.search(this.re.host_fuzzy_test)
+ if (tld_pos >= 0) {
+ // if tld is located after found link - no need to check fuzzy pattern
+ if (this.__index__ < 0 || tld_pos < this.__index__) {
+ if ((ml = text.match(this.__opts__.fuzzyIP ? this.re.link_fuzzy : this.re.link_no_ip_fuzzy)) !== null) {
+ shift = ml.index + ml[1].length
+
+ if (this.__index__ < 0 || shift < this.__index__) {
+ this.__schema__ = ''
+ this.__index__ = shift
+ this.__last_index__ = ml.index + ml[0].length
+ }
+ }
+ }
+ }
+ }
+
+ if (this.__opts__.fuzzyEmail && this.__compiled__['mailto:']) {
+ // guess schemaless emails
+ at_pos = text.indexOf('@')
+ if (at_pos >= 0) {
+      // We can't skip this check, because these cases are possible:
+ // 192.168.1.1@gmail.com, my.in@example.com
+ if ((me = text.match(this.re.email_fuzzy)) !== null) {
+ shift = me.index + me[1].length
+ next = me.index + me[0].length
+
+ if (this.__index__ < 0 || shift < this.__index__ ||
+ (shift === this.__index__ && next > this.__last_index__)) {
+ this.__schema__ = 'mailto:'
+ this.__index__ = shift
+ this.__last_index__ = next
+ }
+ }
+ }
+ }
+
+ return this.__index__ >= 0
+}
+
+/**
+ * LinkifyIt#pretest(text) -> Boolean
+ *
+ * Very quick check that can give false positives. Returns `true` if a link may
+ * exist. Can be used for speed optimization when you need to check that a link
+ * does NOT exist.
+ **/
+LinkifyIt.prototype.pretest = function pretest (text) {
+ return this.re.pretest.test(text)
+}
+
+/**
+ * LinkifyIt#testSchemaAt(text, name, position) -> Number
+ * - text (String): text to scan
+ * - name (String): rule (schema) name
+ * - position (Number): text offset to check from
+ *
+ * Similar to [[LinkifyIt#test]] but checks only specific protocol tail exactly
+ * at given position. Returns length of found pattern (0 on fail).
+ **/
+LinkifyIt.prototype.testSchemaAt = function testSchemaAt (text, schema, pos) {
+ // If not supported schema check requested - terminate
+ if (!this.__compiled__[schema.toLowerCase()]) {
+ return 0
+ }
+ return this.__compiled__[schema.toLowerCase()].validate(text, pos, this)
+}
+
+/**
+ * LinkifyIt#match(text) -> Array|null
+ *
+ * Returns array of found link descriptions or `null` on fail. We strongly
+ * recommend to use [[LinkifyIt#test]] first, for best speed.
+ *
+ * ##### Result match description
+ *
+ * - __schema__ - link schema, can be empty for fuzzy links, or `//` for
+ * protocol-neutral links.
+ * - __index__ - offset of matched text
+ * - __lastIndex__ - index of next char after match end
+ * - __raw__ - matched text
+ * - __text__ - normalized text
+ * - __url__ - link, generated from matched text
+ **/
+LinkifyIt.prototype.match = function match (text) {
+ const result = []
+ let shift = 0
+
+ // Try to take previous element from cache, if .test() called before
+ if (this.__index__ >= 0 && this.__text_cache__ === text) {
+ result.push(createMatch(this, shift))
+ shift = this.__last_index__
+ }
+
+ // Cut head if cache was used
+ let tail = shift ? text.slice(shift) : text
+
+ // Scan string until end reached
+ while (this.test(tail)) {
+ result.push(createMatch(this, shift))
+
+ tail = tail.slice(this.__last_index__)
+ shift += this.__last_index__
+ }
+
+ if (result.length) {
+ return result
+ }
+
+ return null
+}
+
+/**
+ * LinkifyIt#matchAtStart(text) -> Match|null
+ *
+ * Returns fully-formed (not fuzzy) link if it starts at the beginning
+ * of the string, and null otherwise.
+ **/
+LinkifyIt.prototype.matchAtStart = function matchAtStart (text) {
+ // Reset scan cache
+ this.__text_cache__ = text
+ this.__index__ = -1
+
+ if (!text.length) return null
+
+ const m = this.re.schema_at_start.exec(text)
+ if (!m) return null
+
+ const len = this.testSchemaAt(text, m[2], m[0].length)
+ if (!len) return null
+
+ this.__schema__ = m[2]
+ this.__index__ = m.index + m[1].length
+ this.__last_index__ = m.index + m[0].length + len
+
+ return createMatch(this, 0)
+}
+
+/** chainable
+ * LinkifyIt#tlds(list [, keepOld]) -> this
+ * - list (Array): list of tlds
+ * - keepOld (Boolean): merge with current list if `true` (`false` by default)
+ *
+ * Load (or merge) new tlds list. Those are used for fuzzy links (without prefix)
+ * to avoid false positives. By default this algorithm is used:
+ *
+ * - hostname with any 2-letter root zones are ok.
+ * - biz|com|edu|gov|net|org|pro|web|xxx|aero|asia|coop|info|museum|name|shop|рф
+ * are ok.
+ * - encoded (`xn--...`) root zones are ok.
+ *
+ * If list is replaced, then exact match for 2-chars root zones will be checked.
+ **/
+LinkifyIt.prototype.tlds = function tlds (list, keepOld) {
+ list = Array.isArray(list) ? list : [list]
+
+ if (!keepOld) {
+ this.__tlds__ = list.slice()
+ this.__tlds_replaced__ = true
+ compile(this)
+ return this
+ }
+
+ this.__tlds__ = this.__tlds__.concat(list)
+ .sort()
+ .filter(function (el, idx, arr) {
+ return el !== arr[idx - 1]
+ })
+ .reverse()
+
+ compile(this)
+ return this
+}
+
+/**
+ * LinkifyIt#normalize(match)
+ *
+ * Default normalizer (if schema does not define its own).
+ **/
+LinkifyIt.prototype.normalize = function normalize (match) {
+ // Do minimal possible changes by default. Need to collect feedback prior
+ // to move forward https://github.com/markdown-it/linkify-it/issues/1
+
+ if (!match.schema) { match.url = 'http://' + match.url }
+
+ if (match.schema === 'mailto:' && !/^mailto:/i.test(match.url)) {
+ match.url = 'mailto:' + match.url
+ }
+}
+
+/**
+ * LinkifyIt#onCompile()
+ *
+ * Override to modify basic RegExp-s.
+ **/
+LinkifyIt.prototype.onCompile = function onCompile () {
+}
+
+export default LinkifyIt
diff --git a/node_modules/linkify-it/package.json b/node_modules/linkify-it/package.json
new file mode 100644
index 0000000..ae3b7c1
--- /dev/null
+++ b/node_modules/linkify-it/package.json
@@ -0,0 +1,58 @@
+{
+ "name": "linkify-it",
+ "version": "5.0.0",
+ "description": "Links recognition library with FULL unicode support",
+ "keywords": [
+ "linkify",
+ "linkifier",
+ "autolink",
+ "autolinker"
+ ],
+ "repository": "markdown-it/linkify-it",
+ "main": "build/index.cjs.js",
+ "module": "index.mjs",
+ "exports": {
+ ".": {
+ "require": "./build/index.cjs.js",
+ "import": "./index.mjs"
+ },
+ "./*": {
+ "require": "./*",
+ "import": "./*"
+ }
+ },
+ "files": [
+ "index.mjs",
+ "lib/",
+ "build/"
+ ],
+ "license": "MIT",
+ "scripts": {
+ "lint": "eslint .",
+ "test": "npm run lint && npm run build && c8 --exclude build --exclude test -r text -r html -r lcov mocha",
+ "demo": "npm run lint && node support/build_demo.mjs",
+ "doc": "node support/build_doc.mjs",
+ "build": "rollup -c support/rollup.config.mjs",
+ "gh-pages": "npm run demo && npm run doc && shx cp -R doc/ demo/ && gh-pages -d demo -f",
+ "prepublishOnly": "npm run lint && npm run build && npm run gh-pages"
+ },
+ "dependencies": {
+ "uc.micro": "^2.0.0"
+ },
+ "devDependencies": {
+ "@rollup/plugin-node-resolve": "^15.2.3",
+ "ansi": "^0.3.0",
+ "benchmark": "^2.1.0",
+ "c8": "^8.0.1",
+ "eslint": "^8.54.0",
+ "eslint-config-standard": "^17.1.0",
+ "gh-pages": "^6.1.0",
+ "mdurl": "^2.0.0",
+ "mocha": "^10.2.0",
+ "ndoc": "^6.0.0",
+ "rollup": "^4.6.1",
+ "shelljs": "^0.8.4",
+ "shx": "^0.3.2",
+ "tlds": "^1.166.0"
+ }
+}
diff --git a/node_modules/lunr/.eslintrc.json b/node_modules/lunr/.eslintrc.json
new file mode 100644
index 0000000..c0aef7f
--- /dev/null
+++ b/node_modules/lunr/.eslintrc.json
@@ -0,0 +1,86 @@
+{
+ "env": {
+ "browser": true,
+ "node": true
+ },
+
+ "globals": {
+ "lunr": true
+ },
+
+ "extends": "eslint:recommended",
+
+ "plugins": [
+ "spellcheck"
+ ],
+
+ "rules": {
+ "spellcheck/spell-checker": [1,
+ {
+ "lang": "en_GB",
+ "skipWords": [
+ "lunr", "val", "param", "idx", "utils", "namespace", "eslint", "latin",
+ "str", "len", "sqrt", "wildcard", "concat", "metadata", "fn", "params",
+ "lexeme", "lex", "pos", "typedef", "wildcards", "lexemes", "fns", "stemmer",
+ "attrs", "tf", "idf", "lookups", "whitelist", "whitelisted", "tokenizer",
+ "whitespace", "automata", "i", "obj", "anymore", "lexer", "var", "refs",
+ "serializable", "tis", "twas", "int", "args", "unshift", "plugins", "upsert",
+ "upserting", "readonly", "baz", "tokenization", "lunrjs", "com", "olivernn",
+ "github", "js"
+ ]
+ }
+ ],
+
+ "no-constant-condition": [
+ "error",
+ { "checkLoops": false }
+ ],
+
+ "no-redeclare": "off",
+
+ "dot-location": [
+ "error",
+ "property"
+ ],
+
+ "no-alert": "error",
+ "no-caller": "error",
+ "no-eval": "error",
+ "no-implied-eval": "error",
+ "no-extend-native": "error",
+ "no-implicit-globals": "error",
+ "no-multi-spaces": "error",
+ "array-bracket-spacing": "error",
+ "block-spacing": "error",
+
+ "brace-style": [
+ "error",
+ "1tbs",
+ { "allowSingleLine": true }
+ ],
+
+ "camelcase": "error",
+ "comma-dangle": "error",
+ "comma-spacing": "error",
+ "comma-style": "error",
+ "computed-property-spacing": "error",
+ "func-style": "error",
+
+ "indent": [
+ "error",
+ 2,
+ { "VariableDeclarator": 2, "SwitchCase": 1 }
+ ],
+
+ "key-spacing": "error",
+ "keyword-spacing": "error",
+ "linebreak-style": "error",
+ "new-cap": "error",
+ "no-trailing-spaces": "error",
+ "no-whitespace-before-property": "error",
+ "semi": ["error", "never"],
+ "space-before-function-paren": ["error", "always"],
+ "space-in-parens": "error",
+ "space-infix-ops": "error"
+ }
+}
diff --git a/node_modules/lunr/.npmignore b/node_modules/lunr/.npmignore
new file mode 100644
index 0000000..dee8632
--- /dev/null
+++ b/node_modules/lunr/.npmignore
@@ -0,0 +1,3 @@
+/node_modules
+docs/
+test/env/file_list.json
diff --git a/node_modules/lunr/.travis.yml b/node_modules/lunr/.travis.yml
new file mode 100644
index 0000000..106e641
--- /dev/null
+++ b/node_modules/lunr/.travis.yml
@@ -0,0 +1,14 @@
+language: node_js
+node_js:
+ - "node"
+ - "6"
+ - "5"
+ - "4"
+script: "make"
+addons:
+ artifacts:
+ branch: master
+ paths:
+ - ./docs
+ target_paths: /docs
+
diff --git a/node_modules/lunr/CHANGELOG.md b/node_modules/lunr/CHANGELOG.md
new file mode 100644
index 0000000..e2a7ca6
--- /dev/null
+++ b/node_modules/lunr/CHANGELOG.md
@@ -0,0 +1,270 @@
+# Changelog
+
+## 2.3.9
+
+* Fix bug [#469](https://github.com/olivernn/lunr.js/issues/469) where a union with a complete set returned a non-complete set. Thanks [Bertrand Le Roy](https://github.com/bleroy) for reporting and fixing.
+
+## 2.3.8
+
+* Fix bug [#422](https://github.com/olivernn/lunr.js/issues/422) where a pipeline function that returned null was not skipping the token as described in the documentation. Thanks [Stephen Cleary](https://github.com/StephenCleary) and [Rob Hoelz](https://github.com/hoelzro) for reporting and investigating.
+
+## 2.3.7
+
+* Fix bug [#417](https://github.com/olivernn/lunr.js/issues/417) where leading white space would cause token position metadata to be reported incorrectly. Thanks [Rob Hoelz](https://github.com/hoelzro) for the fix.
+
+## 2.3.6
+
+* Fix bug [#390](https://github.com/olivernn/lunr.js/issues/390) with fuzzy matching that meant deletions at the end of a word would not match. Thanks [Luca Ongaro](https://github.com/lucaong) for reporting.
+
+## 2.3.5
+
+* Fix bug [#375](https://github.com/olivernn/lunr.js/issues/375) with fuzzy matching that meant insertions at the end of a word would not match. Thanks [Luca Ongaro](https://github.com/lucaong) for reporting and to [Rob Hoelz](https://github.com/hoelzro) for providing a fix.
+* Switch to using `Array.isArray` when checking for results from pipeline functions to support `vm.runInContext`, [#381](https://github.com/olivernn/lunr.js/pull/381) thanks [Rob Hoelz](https://github.com/hoelzro).
+
+## 2.3.4
+
+* Ensure that [inverted index is prototype-less](https://github.com/olivernn/lunr.js/pull/378) after serialization, thanks [Rob Hoelz](https://github.com/hoelzro).
+
+## 2.3.3
+
+* Fix bugs [#270](https://github.com/olivernn/lunr.js/issues/270) and [#368](https://github.com/olivernn/lunr.js/issues/368) where some wildcard searches over long tokens could be extremely slow, potentially pinning the current thread indefinitely. Thanks [Kyle Spearrin](https://github.com/kspearrin) and [Mohamed Eltuhamy](https://github.com/meltuhamy) for reporting.
+
+## 2.3.2
+
+* Fix bug [#369](https://github.com/olivernn/lunr.js/issues/369) in parsing queries that include either a boost or edit distance modifier followed by a presence modifier on a subsequent term. Thanks [mtdjr](https://github.com/mtdjr) for reporting.
+
+## 2.3.1
+
+* Add workaround for inconsistent browser behaviour [#279](https://github.com/olivernn/lunr.js/issues/279), thanks [Luca Ongaro](https://github.com/lucaong).
+* Fix bug in intersect/union of `lunr.Set` [#360](https://github.com/olivernn/lunr.js/issues/360), thanks [Brandon Bethke](https://github.com/brandon-bethke-neudesic) for reporting.
+
+## 2.3.0
+
+* Add support for build time field and document boosts.
+* Add support for indexing nested document fields using field extractors.
+* Prevent usage of problematic characters in field names, thanks [Stephane Mankowski](https://github.com/miraks31).
+* Fix bug when using an array of tokens in a single query term, thanks [Michael Manukyan](https://github.com/mike1808).
+
+## 2.2.1
+
+* Fix bug [#344](https://github.com/olivernn/lunr.js/issues/344) in logic for required terms in multiple fields, thanks [Stephane Mankowski](https://github.com/miraks31).
+* Upgrade mocha and fix some test snafus.
+
+## 2.2.0
+
+* Add support for queries with term presence, e.g. required terms and prohibited terms.
+* Add support for using the output of `lunr.tokenizer` directly with `lunr.Query#term`.
+* Add field name metadata to tokens in build and search pipelines.
+* Fix documentation for `lunr.Index` constructor, thanks [Michael Manukyan](https://github.com/mike1808).
+
+## 2.1.6
+
+* Improve pipeline performance for large fields [#329](https://github.com/olivernn/lunr.js/pull/329), thanks [andymcm](https://github.com/andymcm).
+
+## 2.1.5
+
+* Fix bug [#320](https://github.com/olivernn/lunr.js/issues/320) which caused result metadata to be nested under search term instead of field name. Thanks [Jonny Gerig Meyer](https://github.com/jgerigmeyer) for reporting and fixing.
+
+## 2.1.4
+
+* Cache inverse document calculation during build to improve build performance.
+* Introduce new method for combining term metadata at search time.
+* Improve performance of searches with duplicate search terms.
+* Tweaks to build process.
+
+## 2.1.3
+
+* Remove private tag from `lunr.Builder#build`, it should be public, thanks [Sean Tan](https://github.com/seantanly).
+
+## 2.1.2
+
+* Fix bug [#282](https://github.com/olivernn/lunr.js/issues/282) which caused metadata stored in the index to be mutated during search, thanks [Andrew Aldridge](https://github.com/i80and).
+
+## 2.1.1
+
+* Fix bug [#280](https://github.com/olivernn/lunr.js/issues/280) in builder where an object with prototype was being used as a Map, thanks [Pete Bacon Darwin](https://github.com/petebacondarwin).
+
+## 2.1.0
+
+* Improve handling of term boosts across multiple fields [#263](https://github.com/olivernn/lunr.js/issues/263)
+* Enable escaping of special characters when performing a search [#271](https://github.com/olivernn/lunr.js/issues/271)
+* Add ability to programmatically include leading and trailing wildcards when performing a query.
+
+## 2.0.4
+
+* Fix bug in IDF calculation that meant the weight for common words was not correctly calculated.
+
+## 2.0.3
+
+* Fix bug [#256](https://github.com/olivernn/lunr.js/issues/256) where duplicate query terms could cause a 'duplicate index' error when building the query vector. Thanks [Bjorn Svensson](https://github.com/bsvensson), [Jason Feng](https://github.com/IYCI), and [et1421](https://github.com/et1421) for reporting and confirming the issue.
+
+## 2.0.2
+
+* Fix bug [#255](https://github.com/olivernn/lunr.js/issues/255) where search queries used a different separator than the tokeniser causing some terms to not be searchable. Thanks [Wes Cossick](https://github.com/WesCossick) for reporting.
+* Reduce precision of term scores stored in document vectors to reduce the size of serialised indexes by ~15%, thanks [Qvatra](https://github.com/Qvatra) for the idea.
+
+## 2.0.1
+
+* Fix regression [#254](https://github.com/olivernn/lunr.js/issues/254) where documents containing terms that match properties from Object.prototype cause errors during indexing. Thanks [VonFry](https://github.com/VonFry) for reporting.
+
+## 2.0.0
+
+* Indexes are now immutable; this allows for more space-efficient indexes, more advanced searching and better performance.
+* Text processing can now attach metadata to tokens that enter the index; this opens up the possibility of highlighting search terms in results.
+* More advanced searching including search time field boosts, search by field, fuzzy matching and leading and trailing wildcards.
+
+## 1.0.0
+
+* Deprecate the incorrectly spelled `lunr.tokenizer.seperator`.
+* No other changes, but bumping to 1.0.0 because it's overdue, and the interfaces are pretty stable now. It also paves the way for 2.0.0...
+
+## 0.7.2
+
+* Fix bug when loading a serialised tokeniser [#226](https://github.com/olivernn/lunr.js/issues/226), thanks [Alex Turpin](https://github.com/alexturpin) for reporting the issue.
+* Learn how to spell separator [#223](https://github.com/olivernn/lunr.js/pull/223), thanks [peterennis](https://github.com/peterennis) for helping me learn to spell.
+
+## 0.7.1
+
+* Correctly set the license using the @license doc tag [#217](https://github.com/olivernn/lunr.js/issues/217), thanks [Carlos Araya](https://github.com/caraya).
+
+## 0.7.0
+
+* Make tokenizer a property of the index, allowing for different indexes to use different tokenizers [#205](https://github.com/olivernn/lunr.js/pull/205) and [#21](https://github.com/olivernn/lunr.js/issues/21).
+* Fix bug that prevented very large documents from being indexed [#203](https://github.com/olivernn/lunr.js/pull/203), thanks [Daniel Grießhaber](https://github.com/dangrie158).
+* Performance improvements when adding documents to the index [#208](https://github.com/olivernn/lunr.js/pull/208), thanks [Dougal Matthews](https://github.com/d0ugal).
+
+## 0.6.0
+
+* Ensure document ref property type is preserved when returning results [#117](https://github.com/olivernn/lunr.js/issues/117), thanks [Kyle Kirby](https://github.com/kkirby).
+* Introduce `lunr.generateStopWordFilter` for generating a stop word filter from a provided list of stop words.
+* Replace array-like string access with ES3 compatible `String.prototype.charAt` [#186](https://github.com/olivernn/lunr.js/pull/186), thanks [jkellerer](https://github.com/jkellerer).
+* Move empty string filtering from `lunr.trimmer` to `lunr.Pipeline.prototype.run` so that empty tokens do not enter the index, regardless of the trimmer being used [#178](https://github.com/olivernn/lunr.js/issues/178), [#177](https://github.com/olivernn/lunr.js/issues/177) and [#174](https://github.com/olivernn/lunr.js/issues/174)
+* Allow tokenization of arrays with null and non-string elements [#172](https://github.com/olivernn/lunr.js/issues/172).
+* Parameterize the separator used by `lunr.tokenizer`, fixes [#102](https://github.com/olivernn/lunr.js/issues/102).
+
+## 0.5.12
+
+* Implement `lunr.stopWordFilter` with an object instead of using `lunr.SortedSet` [#170](https://github.com/olivernn/lunr.js/pull/170), resulting in a performance boost for the text processing pipeline, thanks to [Brian Vaughn](https://github.com/bvaughn).
+* Ensure that `lunr.trimmer` does not introduce empty tokens into the index, [#166](https://github.com/olivernn/lunr.js/pull/166), thanks to [janeisklar](https://github.com/janeisklar)
+
+## 0.5.11
+
+* Fix [bug](https://github.com/olivernn/lunr.js/issues/162) when using the unminified build of lunr in some project builds, thanks [Alessio Michelini](https://github.com/darkmavis1980)
+
+## 0.5.10
+
+* Fix bug in IDF calculation, thanks to [weixsong](https://github.com/weixsong) for discovering the issue.
+* Documentation fixes [#111](https://github.com/olivernn/lunr.js/pull/111) thanks [Chris Van](https://github.com/cvan).
+* Remove version from bower.json as it is not needed [#160](https://github.com/olivernn/lunr.js/pull/160), thanks [Kevin Kirsche](https://github.com/kkirsche)
+* Fix link to augment.js on the home page [#159](https://github.com/olivernn/lunr.js/issues/159), thanks [Gábor Nádai](https://github.com/mefiblogger)
+
+## 0.5.9
+
+* Remove recursion from SortedSet#indexOf and SortedSet#locationFor to gain small performance gains in Index#search and Index#add
+* Fix incorrect handling of non-existent functions when adding/removing from a Pipeline [#146](https://github.com/olivernn/lunr.js/issues/146) thanks to [weixsong](https://github.com/weixsong)
+
+## 0.5.8
+
+* Fix typo when referencing Martin Porter's home page http://tartarus.org/~martin/ [#132](https://github.com/olivernn/lunr.js/pull/132) thanks [James Aylett](https://github.com/jaylett)
+* Performance improvement for tokenizer [#139](https://github.com/olivernn/lunr.js/pull/139) thanks [Arun Srinivasan](https://github.com/satchmorun)
+* Fix vector magnitude caching bug :flushed: [#142](https://github.com/olivernn/lunr.js/pull/142) thanks [Richard Poole](https://github.com/richardpoole)
+* Fix vector insertion bug that prevented lesser ordered nodes to be inserted into a vector [#143](https://github.com/olivernn/lunr.js/pull/143) thanks [Richard Poole](https://github.com/richardpoole)
+* Fix inefficient use of arguments in SortedSet add method, thanks to [Max Nordlund](https://github.com/maxnordlund).
+* Fix deprecated use of path.exists in test server [#141](https://github.com/olivernn/lunr.js/pull/141) thanks [wei song](https://github.com/weixsong)
+
+## 0.5.7
+
+* Performance improvement for stemmer [#124](https://github.com/olivernn/lunr.js/pull/124) thanks [Tony Jacobs](https://github.com/tony-jacobs)
+
+## 0.5.6
+
+* Performance improvement when adding documents to the index [#114](https://github.com/olivernn/lunr.js/pull/114) thanks [Alex Holmes](https://github.com/alex2)
+
+## 0.5.5
+
+* Fix bug in tokenizer introduced in 0.5.4 [#101](https://github.com/olivernn/lunr.js/pull/101) thanks [Nolan Lawson](https://github.com/nolanlawson)
+
+## 0.5.4
+
+* Tokenizer also splits on hyphens [#98](https://github.com/olivernn/lunr.js/pull/98/files) thanks [Nolan Lawson](https://github.com/nolanlawson)
+
+## 0.5.3
+
+* Correctly stem words ending with the letter 'y' [#84](https://github.com/olivernn/lunr.js/pull/84) thanks [Mihai Valentin](https://github.com/MihaiValentin)
+* Improve build tools and dev dependency installation [#78](https://github.com/olivernn/lunr.js/pull/78) thanks [Ben Pickles](https://github.com/benpickles)
+
+## 0.5.2
+
+* Use npm they said, it'll be easy they said.
+
+## 0.5.1
+
+* Because [npm issues](https://github.com/olivernn/lunr.js/issues/77) :(
+
+## 0.5.0
+
+* Add plugin support to enable i18n and other extensions to lunr.
+* Add AMD support [#72](https://github.com/olivernn/lunr.js/issues/72) thanks [lnwdr](https://github.com/lnwdr).
+* lunr.Vector now implemented using linked lists for better performance especially in indexes with large numbers of unique tokens.
+* Build system clean up.
+
+## 0.4.5
+
+* Fix performance regression introduced in 0.4.4 by fixing #64.
+
+## 0.4.4
+
+* Fix bug [#64](https://github.com/olivernn/lunr.js/issues/64) idf cache should handle tokens with the same name as object properties, thanks [gitgrimbo](https://github.com/gitgrimbo).
+* Intersperse source files with a semicolon as part of the build process, fixes [#61](https://github.com/olivernn/lunr.js/issues/61), thanks [shyndman](https://github.com/shyndman).
+
+## 0.4.3
+
+* Fix bug [#49](https://github.com/olivernn/lunr.js/issues/49) tokenizer should handle null and undefined as arguments, thanks [jona](https://github.com/jona).
+
+## 0.4.2
+
+* Fix bug [#47](https://github.com/olivernn/lunr.js/issues/47) tokenizer converts its input to a string before trying to split it into tokens, thanks [mikhailkozlov](https://github.com/mikhailkozlov).
+
+## 0.4.1
+
+* Fix bug [#41](https://github.com/olivernn/lunr.js/issues/41) that caused issues when indexing mixed case tags, thanks [Aptary](https://github.com/Aptary)
+
+## 0.4.0
+
+* Add index mutation events ('add', 'update' and 'remove').
+* Performance improvements to searching.
+* Penalise non-exact matches so exact matches are better ranked than expanded matches.
+
+## 0.3.3
+
+* Fix bug [#32](https://github.com/olivernn/lunr.js/pull/32) which prevented lunr being used where a `console` object is not present, thanks [Tony Marklove](https://github.com/jjbananas) and [wyuenho](https://github.com/wyuenho)
+
+## 0.3.2
+
+* Fix bug [#27](https://github.com/olivernn/lunr.js/pull/27) when trying to calculate tf with empty fields, thanks [Gambhiro](https://github.com/gambhiro)
+
+## 0.3.1
+
+* Fix bug [#24](https://github.com/olivernn/lunr.js/pull/24) that caused an error when trying to remove a non-existent document from the index, thanks [Jesús Leganés Combarro](https://github.com/piranna)
+
+## 0.3.0
+
+* Implement [JSON serialisation](https://github.com/olivernn/lunr.js/pull/14), allows indexes to be loaded and dumped, thanks [ssured](https://github.com/ssured).
+* Performance improvements to searching and indexing.
+* Fix bug [#15](https://github.com/olivernn/lunr.js/pull/15) with tokeniser that added stray empty white space to the index, thanks [ssured](https://github.com/ssured).
+
+## 0.2.3
+
+* Fix issue with searching for a term not in the index [#12](https://github.com/olivernn/lunr.js/issues/12), thanks [mcnerthney](https://github.com/mcnerthney) and [makoto](https://github.com/makoto)
+
+## 0.2.2
+
+* Boost exact term matches so they are better ranked than expanded term matches, fixes [#10](https://github.com/olivernn/lunr.js/issues/10), thanks [ssured](https://github.com/ssured)
+
+## 0.2.1
+
+* Changes to the build process.
+* Add component.json and package.json
+* Add phantomjs test runner
+* Remove redundant attributes
+* Many [spelling corrections](https://github.com/olivernn/lunr.js/pull/8), thanks [Pascal Borreli](https://github.com/pborreli)
diff --git a/node_modules/lunr/CNAME b/node_modules/lunr/CNAME
new file mode 100644
index 0000000..d162cd8
--- /dev/null
+++ b/node_modules/lunr/CNAME
@@ -0,0 +1 @@
+lunrjs.com
diff --git a/node_modules/lunr/CONTRIBUTING.md b/node_modules/lunr/CONTRIBUTING.md
new file mode 100644
index 0000000..2f4441f
--- /dev/null
+++ b/node_modules/lunr/CONTRIBUTING.md
@@ -0,0 +1,20 @@
+Contributions are very welcome. To make the process as easy as possible please follow these steps:
+
+* Open an issue detailing the bug you've found, or the feature you wish to add. Simplified working examples using something like [jsFiddle](http://jsfiddle.net) make it easier to diagnose your problem.
+* Add tests for your code (so I don't accidentally break it in the future).
+* Don't change version numbers or make new builds as part of your changes.
+* Don't change the built versions of the library; only make changes to code in the `lib` directory.
+
+# Developer Dependencies
+
+A JavaScript runtime is required for building the library.
+
+Run the tests (using PhantomJS):
+
+ make test
+
+The tests can also be run in the browser by starting the test server:
+
+ make server
+
+This will start a server on port 3000; the tests are then available at `/test`.
diff --git a/node_modules/lunr/LICENSE b/node_modules/lunr/LICENSE
new file mode 100644
index 0000000..e6e4e21
--- /dev/null
+++ b/node_modules/lunr/LICENSE
@@ -0,0 +1,19 @@
+Copyright (C) 2013 by Oliver Nightingale
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
diff --git a/node_modules/lunr/Makefile b/node_modules/lunr/Makefile
new file mode 100644
index 0000000..37c7ce7
--- /dev/null
+++ b/node_modules/lunr/Makefile
@@ -0,0 +1,90 @@
+
+SRC = lib/lunr.js \
+ lib/utils.js \
+ lib/field_ref.js \
+ lib/set.js \
+ lib/idf.js \
+ lib/token.js \
+ lib/tokenizer.js \
+ lib/pipeline.js \
+ lib/vector.js \
+ lib/stemmer.js \
+ lib/stop_word_filter.js \
+ lib/trimmer.js \
+ lib/token_set.js \
+ lib/token_set_builder.js \
+ lib/index.js \
+ lib/builder.js \
+ lib/match_data.js \
+ lib/query.js \
+ lib/query_parse_error.js \
+ lib/query_lexer.js \
+ lib/query_parser.js \
+
+YEAR = $(shell date +%Y)
+VERSION = $(shell cat VERSION)
+
+NODE ?= $(shell which node)
+NPM ?= $(shell which npm)
+UGLIFYJS ?= ./node_modules/.bin/uglifyjs
+MOCHA ?= ./node_modules/.bin/mocha
+MUSTACHE ?= ./node_modules/.bin/mustache
+ESLINT ?= ./node_modules/.bin/eslint
+JSDOC ?= ./node_modules/.bin/jsdoc
+NODE_STATIC ?= ./node_modules/.bin/static
+
+all: test lint docs
+release: lunr.js lunr.min.js bower.json package.json component.json docs
+
+lunr.js: $(SRC)
+ cat build/wrapper_start $^ build/wrapper_end | \
+ sed "s/@YEAR/${YEAR}/" | \
+ sed "s/@VERSION/${VERSION}/" > $@
+
+lunr.min.js: lunr.js
+ ${UGLIFYJS} --compress --mangle --comments < $< > $@
+
+%.json: build/%.json.template
+ cat $< | sed "s/@VERSION/${VERSION}/" > $@
+
+size: lunr.min.js
+ @gzip -c lunr.min.js | wc -c
+
+server: test/index.html
+ ${NODE_STATIC} -a 0.0.0.0 -H '{"Cache-Control": "no-cache, must-revalidate"}'
+
+lint: $(SRC)
+ ${ESLINT} $^
+
+perf/*_perf.js:
+ ${NODE} -r ./perf/perf_helper.js $@
+
+benchmark: perf/*_perf.js
+
+test: node_modules lunr.js
+ ${MOCHA} test/*.js -u tdd -r test/test_helper.js -R dot -C
+
+test/inspect: node_modules lunr.js
+ ${MOCHA} test/*.js -u tdd -r test/test_helper.js -R dot -C --inspect-brk=0.0.0.0:9292
+
+test/env/file_list.json: $(wildcard test/*test.js)
+ ${NODE} -p 'JSON.stringify({test_files: process.argv.slice(1)})' $^ > $@
+
+test/index.html: test/env/file_list.json test/env/index.mustache
+ ${MUSTACHE} $^ > $@
+
+docs: $(SRC)
+ ${JSDOC} -R README.md -d docs -c build/jsdoc.conf.json $^
+
+clean:
+ rm -f lunr{.min,}.js
+ rm -rf docs
+ rm *.json
+
+reset:
+ git checkout lunr.* *.json
+
+node_modules: package.json
+ ${NPM} -s install
+
+.PHONY: test clean docs reset perf/*_perf.js test/inspect
diff --git a/node_modules/lunr/README.md b/node_modules/lunr/README.md
new file mode 100644
index 0000000..ea7ace6
--- /dev/null
+++ b/node_modules/lunr/README.md
@@ -0,0 +1,78 @@
+# Lunr.js
+
+[Join the chat on Gitter](https://gitter.im/olivernn/lunr.js?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
+
+[Build status on Travis CI](https://travis-ci.org/olivernn/lunr.js)
+
+A bit like Solr, but much smaller and not as bright.
+
+## Example
+
+A very simple search index can be created using the following:
+
+```javascript
+var idx = lunr(function () {
+ this.field('title')
+ this.field('body')
+
+ this.add({
+ "title": "Twelfth-Night",
+ "body": "If music be the food of love, play on: Give me excess of it…",
+ "author": "William Shakespeare",
+ "id": "1"
+ })
+})
+```
+
+Then searching is as simple as:
+
+```javascript
+idx.search("love")
+```
+
+This returns a list of matching documents with a score of how closely they match the search query as well as any associated metadata about the match:
+
+```javascript
+[
+ {
+ "ref": "1",
+ "score": 0.3535533905932737,
+ "matchData": {
+ "metadata": {
+ "love": {
+ "body": {}
+ }
+ }
+ }
+ }
+]
+```
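+
+The `ref` in each result can be used to look the full document back up. A minimal sketch (not from the original README), assuming the indexed documents are also kept in a plain array:
+
+```javascript
+var documents = [
+  { "id": "1", "title": "Twelfth-Night", "author": "William Shakespeare" }
+]
+
+idx.search("love").forEach(function (result) {
+  // result.ref is the document id ("1" here), so find the matching document
+  var doc = documents.filter(function (d) { return d.id === result.ref })[0]
+  console.log(doc.title, result.score)
+})
+```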
+
+[API documentation](https://lunrjs.com/docs/index.html) is available, as well as a [full working example](https://olivernn.github.io/moonwalkers/).
+
+## Description
+
+Lunr.js is a small, full-text search library for use in the browser. It indexes JSON documents and provides a simple search interface for retrieving documents that best match text queries.
+
+## Why
+
+For web applications with all their data already sitting in the client, it makes sense to be able to search that data on the client too. It saves adding extra, complicated services on the server. A local search index will be quicker, has no network overhead, and will remain available and usable even without a network connection.
+
+## Installation
+
+Simply include the lunr.js source file in the page where you want to use it. Lunr.js is supported in all modern browsers.
+
+Alternatively an npm package is also available `npm install lunr`.
+
+Browsers that do not support ES5 will require a JavaScript shim for Lunr to work. You can either use [Augment.js](https://github.com/olivernn/augment.js), [ES5-Shim](https://github.com/kriskowal/es5-shim) or any library that patches old browsers to provide an ES5 compatible JavaScript environment.
+
+## Features
+
+* Full text search support for 14 languages
+* Boost terms at query time or boost entire documents at index time
+* Scope searches to specific fields
+* Fuzzy term matching with wildcards or edit distance
+
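+The query syntax below exercises most of the features listed above. The terms are illustrative only and assume the `idx` index built in the example earlier in this README:
+
+```javascript
+idx.search("title:love")    // scope a term to the title field
+idx.search("love^10 music") // boost 'love' relative to 'music' at query time
+idx.search("lo*")           // trailing wildcard
+idx.search("love~1")        // fuzzy match with an edit distance of 1
+```
+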
+## Contributing
+
+See the [`CONTRIBUTING.md` file](CONTRIBUTING.md).
diff --git a/node_modules/lunr/VERSION b/node_modules/lunr/VERSION
new file mode 100644
index 0000000..d63fa57
--- /dev/null
+++ b/node_modules/lunr/VERSION
@@ -0,0 +1 @@
+2.3.9
\ No newline at end of file
diff --git a/node_modules/lunr/bower.json b/node_modules/lunr/bower.json
new file mode 100644
index 0000000..16d6d24
--- /dev/null
+++ b/node_modules/lunr/bower.json
@@ -0,0 +1,11 @@
+{
+ "name": "lunr.js",
+ "version": "2.3.9",
+ "main": "lunr.js",
+ "ignore": [
+ "tests/",
+ "perf/",
+ "build/",
+ "docs/"
+ ]
+}
diff --git a/node_modules/lunr/component.json b/node_modules/lunr/component.json
new file mode 100644
index 0000000..418b729
--- /dev/null
+++ b/node_modules/lunr/component.json
@@ -0,0 +1,9 @@
+{
+ "name": "lunr",
+ "repo": "olivernn/lunr.js",
+ "version": "2.3.9",
+ "description": "Simple full-text search in your browser.",
+ "license": "MIT",
+ "main": "lunr.js",
+ "scripts": ["lunr.js"]
+}
diff --git a/node_modules/lunr/index.html b/node_modules/lunr/index.html
new file mode 100644
index 0000000..0c55ecf
--- /dev/null
+++ b/node_modules/lunr/index.html
@@ -0,0 +1,305 @@
+
+lunr.js - A bit like Solr, but much smaller and not as bright
+
+lunr.js is a simple full text search engine for your client side applications. It is designed to be small, yet full featured, enabling you to provide a great search experience without the need for external, server side, search services.
+
+lunr.js has no external dependencies, although it does require a modern browser with ES5 support. In older browsers you can use an ES5 shim, such as augment.js, to provide any missing JavaScript functionality.
+
+Pipeline
+
+Every document and search query that enters lunr is passed through a text processing pipeline. The pipeline is simply a stack of functions that perform some processing on the text. Pipeline functions act on the text one token at a time, and what they return is passed to the next function in the pipeline.
+
+By default lunr adds a stop word filter and stemmer to the pipeline. You can also add your own processors or remove the default ones depending on your requirements. The stemmer currently used is an English language stemmer, which could be replaced with a non-English language stemmer if required, or a Metaphoning processor could be added.
+
+    var index = lunr(function () {
+      this.pipeline.add(function (token, tokenIndex, tokens) {
+        // text processing in here
+      })
+
+      this.pipeline.after(lunr.stopWordFilter, function (token, tokenIndex, tokens) {
+        // text processing in here
+      })
+    })
+
+Functions in the pipeline are called with three arguments: the current token being processed, the index of that token in the array of tokens, and the whole list of tokens that make up the document being processed. This enables simple unigram processing of tokens as well as more sophisticated n-gram processing.
+
+The function should return the processed version of the text, which will in turn be passed to the next function in the pipeline. Returning undefined will prevent any further processing of the token, and that token will not make it to the index.
+
+Tokenization
+
+Tokenization is how lunr converts documents and searches into individual tokens, ready to be run through the text processing pipeline and entered or looked up in the index.
+
+The default tokenizer included with lunr is designed to handle general English text well, although application- or language-specific tokenizers can be used instead.
+
+Stemming
+
+Stemming increases the recall of the search index by reducing related words down to their stem, so that non-exact search terms still match relevant documents. For example 'search', 'searching' and 'searched' all get reduced to the stem 'search'.
+
+lunr automatically includes a stemmer based on Martin Porter's algorithms.
+
+Stop words
+
+Stop words are words that are very common and are not useful in differentiating between documents. These are automatically removed by lunr. This helps to reduce the size of the index and improve search speed and accuracy.
+
+The default stop word filter contains a large list of very common words in English. For best results a corpus-specific stop word filter can also be added to the pipeline. The search algorithm already penalises more common words, but preventing them from entering the index at all can be very beneficial for both space and speed performance.
+
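+A corpus-specific filter like the one described above can be generated with the `lunr.generateStopWordFilter` helper that ships with the bundled lunr 2.3.9. The sketch below is illustrative only (the word list, field names and document are made up), and note that in lunr 2.x pipeline functions receive `lunr.Token` objects rather than plain strings:
+
+    // build a pipeline function that drops domain-specific noise words
+    var corpusStopWords = lunr.generateStopWordFilter(['lorem', 'ipsum'])
+
+    // registering is only required if the index will be serialised
+    lunr.Pipeline.registerFunction(corpusStopWords, 'corpusStopWords')
+
+    var idx = lunr(function () {
+      this.ref('id')
+      this.field('body')
+
+      // run the custom filter after the built-in English stop word filter
+      this.pipeline.after(lunr.stopWordFilter, corpusStopWords)
+
+      this.add({ id: '1', body: 'lorem ipsum dolor sit amet' })
+    })
+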
diff --git a/node_modules/lunr/lunr.js b/node_modules/lunr/lunr.js
new file mode 100644
index 0000000..6aa370f
--- /dev/null
+++ b/node_modules/lunr/lunr.js
@@ -0,0 +1,3475 @@
+/**
+ * lunr - http://lunrjs.com - A bit like Solr, but much smaller and not as bright - 2.3.9
+ * Copyright (C) 2020 Oliver Nightingale
+ * @license MIT
+ */
+
+;(function(){
+
+/**
+ * A convenience function for configuring and constructing
+ * a new lunr Index.
+ *
+ * A lunr.Builder instance is created and the pipeline setup
+ * with a trimmer, stop word filter and stemmer.
+ *
+ * This builder object is yielded to the configuration function
+ * that is passed as a parameter, allowing the list of fields
+ * and other builder parameters to be customised.
+ *
+ * All documents _must_ be added within the passed config function.
+ *
+ * @example
+ * var idx = lunr(function () {
+ * this.field('title')
+ * this.field('body')
+ * this.ref('id')
+ *
+ * documents.forEach(function (doc) {
+ * this.add(doc)
+ * }, this)
+ * })
+ *
+ * @see {@link lunr.Builder}
+ * @see {@link lunr.Pipeline}
+ * @see {@link lunr.trimmer}
+ * @see {@link lunr.stopWordFilter}
+ * @see {@link lunr.stemmer}
+ * @namespace {function} lunr
+ */
+var lunr = function (config) {
+ var builder = new lunr.Builder
+
+ builder.pipeline.add(
+ lunr.trimmer,
+ lunr.stopWordFilter,
+ lunr.stemmer
+ )
+
+ builder.searchPipeline.add(
+ lunr.stemmer
+ )
+
+ config.call(builder, builder)
+ return builder.build()
+}
+
+lunr.version = "2.3.9"
+/*!
+ * lunr.utils
+ * Copyright (C) 2020 Oliver Nightingale
+ */
+
+/**
+ * A namespace containing utils for the rest of the lunr library
+ * @namespace lunr.utils
+ */
+lunr.utils = {}
+
+/**
+ * Print a warning message to the console.
+ *
+ * @param {String} message The message to be printed.
+ * @memberOf lunr.utils
+ * @function
+ */
+lunr.utils.warn = (function (global) {
+ /* eslint-disable no-console */
+ return function (message) {
+ if (global.console && console.warn) {
+ console.warn(message)
+ }
+ }
+ /* eslint-enable no-console */
+})(this)
+
+/**
+ * Convert an object to a string.
+ *
+ * In the case of `null` and `undefined` the function returns
+ * the empty string, in all other cases the result of calling
+ * `toString` on the passed object is returned.
+ *
+ * @param {Any} obj The object to convert to a string.
+ * @return {String} string representation of the passed object.
+ * @memberOf lunr.utils
+ */
+lunr.utils.asString = function (obj) {
+ if (obj === void 0 || obj === null) {
+ return ""
+ } else {
+ return obj.toString()
+ }
+}
+
+/**
+ * Clones an object.
+ *
+ * Will create a copy of an existing object such that any mutations
+ * on the copy cannot affect the original.
+ *
+ * Only shallow objects are supported, passing a nested object to this
+ * function will cause a TypeError.
+ *
+ * Objects with primitives, and arrays of primitives are supported.
+ *
+ * @param {Object} obj The object to clone.
+ * @return {Object} a clone of the passed object.
+ * @throws {TypeError} when a nested object is passed.
+ * @memberOf Utils
+ */
+lunr.utils.clone = function (obj) {
+ if (obj === null || obj === undefined) {
+ return obj
+ }
+
+ var clone = Object.create(null),
+ keys = Object.keys(obj)
+
+ for (var i = 0; i < keys.length; i++) {
+ var key = keys[i],
+ val = obj[key]
+
+ if (Array.isArray(val)) {
+ clone[key] = val.slice()
+ continue
+ }
+
+ if (typeof val === 'string' ||
+ typeof val === 'number' ||
+ typeof val === 'boolean') {
+ clone[key] = val
+ continue
+ }
+
+ throw new TypeError("clone is not deep and does not support nested objects")
+ }
+
+ return clone
+}
+lunr.FieldRef = function (docRef, fieldName, stringValue) {
+ this.docRef = docRef
+ this.fieldName = fieldName
+ this._stringValue = stringValue
+}
+
+lunr.FieldRef.joiner = "/"
+
+lunr.FieldRef.fromString = function (s) {
+ var n = s.indexOf(lunr.FieldRef.joiner)
+
+ if (n === -1) {
+ throw "malformed field ref string"
+ }
+
+ var fieldRef = s.slice(0, n),
+ docRef = s.slice(n + 1)
+
+ return new lunr.FieldRef (docRef, fieldRef, s)
+}
+
+lunr.FieldRef.prototype.toString = function () {
+ if (this._stringValue == undefined) {
+ this._stringValue = this.fieldName + lunr.FieldRef.joiner + this.docRef
+ }
+
+ return this._stringValue
+}
+/*!
+ * lunr.Set
+ * Copyright (C) 2020 Oliver Nightingale
+ */
+
+/**
+ * A lunr set.
+ *
+ * @constructor
+ */
+lunr.Set = function (elements) {
+ this.elements = Object.create(null)
+
+ if (elements) {
+ this.length = elements.length
+
+ for (var i = 0; i < this.length; i++) {
+ this.elements[elements[i]] = true
+ }
+ } else {
+ this.length = 0
+ }
+}
+
+/**
+ * A complete set that contains all elements.
+ *
+ * @static
+ * @readonly
+ * @type {lunr.Set}
+ */
+lunr.Set.complete = {
+ intersect: function (other) {
+ return other
+ },
+
+ union: function () {
+ return this
+ },
+
+ contains: function () {
+ return true
+ }
+}
+
+/**
+ * An empty set that contains no elements.
+ *
+ * @static
+ * @readonly
+ * @type {lunr.Set}
+ */
+lunr.Set.empty = {
+ intersect: function () {
+ return this
+ },
+
+ union: function (other) {
+ return other
+ },
+
+ contains: function () {
+ return false
+ }
+}
+
+/**
+ * Returns true if this set contains the specified object.
+ *
+ * @param {object} object - Object whose presence in this set is to be tested.
+ * @returns {boolean} - True if this set contains the specified object.
+ */
+lunr.Set.prototype.contains = function (object) {
+ return !!this.elements[object]
+}
+
+/**
+ * Returns a new set containing only the elements that are present in both
+ * this set and the specified set.
+ *
+ * @param {lunr.Set} other - set to intersect with this set.
+ * @returns {lunr.Set} a new set that is the intersection of this and the specified set.
+ */
+
+lunr.Set.prototype.intersect = function (other) {
+ var a, b, elements, intersection = []
+
+ if (other === lunr.Set.complete) {
+ return this
+ }
+
+ if (other === lunr.Set.empty) {
+ return other
+ }
+
+ if (this.length < other.length) {
+ a = this
+ b = other
+ } else {
+ a = other
+ b = this
+ }
+
+ elements = Object.keys(a.elements)
+
+ for (var i = 0; i < elements.length; i++) {
+ var element = elements[i]
+ if (element in b.elements) {
+ intersection.push(element)
+ }
+ }
+
+ return new lunr.Set (intersection)
+}
+
+/**
+ * Returns a new set combining the elements of this and the specified set.
+ *
+ * @param {lunr.Set} other - set to union with this set.
+ * @return {lunr.Set} a new set that is the union of this and the specified set.
+ */
+
+lunr.Set.prototype.union = function (other) {
+ if (other === lunr.Set.complete) {
+ return lunr.Set.complete
+ }
+
+ if (other === lunr.Set.empty) {
+ return this
+ }
+
+ return new lunr.Set(Object.keys(this.elements).concat(Object.keys(other.elements)))
+}
+/**
+ * A function to calculate the inverse document frequency for
+ * a posting. This is shared between the builder and the index
+ *
+ * @private
+ * @param {object} posting - The posting for a given term
+ * @param {number} documentCount - The total number of documents.
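+ *
+ * @example <caption>A worked example with illustrative counts: 10 documents, 2 of which contain the term.</caption>
+ * // x = (10 - 2 + 0.5) / (2 + 0.5) = 3.4
+ * // idf = Math.log(1 + 3.4) ≈ 1.48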
+ */
+lunr.idf = function (posting, documentCount) {
+ var documentsWithTerm = 0
+
+ for (var fieldName in posting) {
+ if (fieldName == '_index') continue // Ignore the term index, its not a field
+ documentsWithTerm += Object.keys(posting[fieldName]).length
+ }
+
+ var x = (documentCount - documentsWithTerm + 0.5) / (documentsWithTerm + 0.5)
+
+ return Math.log(1 + Math.abs(x))
+}
+
+/**
+ * A token wraps a string representation of a token
+ * as it is passed through the text processing pipeline.
+ *
+ * @constructor
+ * @param {string} [str=''] - The string token being wrapped.
+ * @param {object} [metadata={}] - Metadata associated with this token.
+ */
+lunr.Token = function (str, metadata) {
+ this.str = str || ""
+ this.metadata = metadata || {}
+}
+
+/**
+ * Returns the token string that is being wrapped by this object.
+ *
+ * @returns {string}
+ */
+lunr.Token.prototype.toString = function () {
+ return this.str
+}
+
+/**
+ * A token update function is used when updating or optionally
+ * when cloning a token.
+ *
+ * @callback lunr.Token~updateFunction
+ * @param {string} str - The string representation of the token.
+ * @param {Object} metadata - All metadata associated with this token.
+ */
+
+/**
+ * Applies the given function to the wrapped string token.
+ *
+ * @example
+ * token.update(function (str, metadata) {
+ * return str.toUpperCase()
+ * })
+ *
+ * @param {lunr.Token~updateFunction} fn - A function to apply to the token string.
+ * @returns {lunr.Token}
+ */
+lunr.Token.prototype.update = function (fn) {
+ this.str = fn(this.str, this.metadata)
+ return this
+}
+
+/**
+ * Creates a clone of this token. Optionally a function can be
+ * applied to the cloned token.
+ *
+ * @param {lunr.Token~updateFunction} [fn] - An optional function to apply to the cloned token.
+ * @returns {lunr.Token}
+ */
+lunr.Token.prototype.clone = function (fn) {
+ fn = fn || function (s) { return s }
+ return new lunr.Token (fn(this.str, this.metadata), this.metadata)
+}
+/*!
+ * lunr.tokenizer
+ * Copyright (C) 2020 Oliver Nightingale
+ */
+
+/**
+ * A function for splitting a string into tokens ready to be inserted into
+ * the search index. Uses `lunr.tokenizer.separator` to split strings, change
+ * the value of this property to change how strings are split into tokens.
+ *
+ * This tokenizer will convert its parameter to a string by calling `toString` and
+ * then will split this string on the character in `lunr.tokenizer.separator`.
+ * Arrays will have their elements converted to strings and wrapped in a lunr.Token.
+ *
+ * Optional metadata can be passed to the tokenizer, this metadata will be cloned and
+ * added as metadata to every token that is created from the object to be tokenized.
+ *
+ * @static
+ * @param {?(string|object|object[])} obj - The object to convert into tokens
+ * @param {?object} metadata - Optional metadata to associate with every token
+ * @returns {lunr.Token[]}
+ * @see {@link lunr.Pipeline}
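+ *
+ * @example <caption>A minimal usage sketch; the input string is illustrative.</caption>
+ * // yields two lunr.Token instances, "green" and "tea", each carrying
+ * // "position" and "index" metadata describing where the token was found
+ * lunr.tokenizer("Green tea")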
+ */
+lunr.tokenizer = function (obj, metadata) {
+ if (obj == null || obj == undefined) {
+ return []
+ }
+
+ if (Array.isArray(obj)) {
+ return obj.map(function (t) {
+ return new lunr.Token(
+ lunr.utils.asString(t).toLowerCase(),
+ lunr.utils.clone(metadata)
+ )
+ })
+ }
+
+ var str = obj.toString().toLowerCase(),
+ len = str.length,
+ tokens = []
+
+ for (var sliceEnd = 0, sliceStart = 0; sliceEnd <= len; sliceEnd++) {
+ var char = str.charAt(sliceEnd),
+ sliceLength = sliceEnd - sliceStart
+
+ if ((char.match(lunr.tokenizer.separator) || sliceEnd == len)) {
+
+ if (sliceLength > 0) {
+ var tokenMetadata = lunr.utils.clone(metadata) || {}
+ tokenMetadata["position"] = [sliceStart, sliceLength]
+ tokenMetadata["index"] = tokens.length
+
+ tokens.push(
+ new lunr.Token (
+ str.slice(sliceStart, sliceEnd),
+ tokenMetadata
+ )
+ )
+ }
+
+ sliceStart = sliceEnd + 1
+ }
+
+ }
+
+ return tokens
+}
+
+/**
+ * The separator used to split a string into tokens. Override this property to change the behaviour of
+ * `lunr.tokenizer` when tokenizing strings. By default this splits on whitespace and hyphens.
+ *
+ * @static
+ * @see lunr.tokenizer
+ */
+lunr.tokenizer.separator = /[\s\-]+/
+/*!
+ * lunr.Pipeline
+ * Copyright (C) 2020 Oliver Nightingale
+ */
+
+/**
+ * lunr.Pipelines maintain an ordered list of functions to be applied to all
+ * tokens in documents entering the search index and queries being ran against
+ * the index.
+ *
+ * An instance of lunr.Index created with the lunr shortcut will contain a
+ * pipeline with a stop word filter and an English language stemmer. Extra
+ * functions can be added before or after either of these functions or these
+ * default functions can be removed.
+ *
+ * When run the pipeline will call each function in turn, passing a token, the
+ * index of that token in the original list of all tokens and finally a list of
+ * all the original tokens.
+ *
+ * The output of functions in the pipeline will be passed to the next function
+ * in the pipeline. To exclude a token from entering the index the function
+ * should return undefined, the rest of the pipeline will not be called with
+ * this token.
+ *
+ * For serialisation of pipelines to work, all functions used in an instance of
+ * a pipeline should be registered with lunr.Pipeline. Registered functions can
+ * then be loaded. If trying to load a serialised pipeline that uses functions
+ * that are not registered an error will be thrown.
+ *
+ * If not planning on serialising the pipeline then registering pipeline functions
+ * is not necessary.
+ *
+ * @constructor
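+ *
+ * @example <caption>A minimal usage sketch; the function and label names are illustrative.</caption>
+ * var toUpperCase = function (token) {
+ *   return token.update(function (str) { return str.toUpperCase() })
+ * }
+ *
+ * lunr.Pipeline.registerFunction(toUpperCase, 'toUpperCase')
+ *
+ * var pipeline = new lunr.Pipeline
+ * pipeline.add(toUpperCase)
+ * pipeline.runString("hello") // => ["HELLO"]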
+ */
+lunr.Pipeline = function () {
+ this._stack = []
+}
+
+lunr.Pipeline.registeredFunctions = Object.create(null)
+
+/**
+ * A pipeline function maps lunr.Token to lunr.Token. A lunr.Token contains the token
+ * string as well as all known metadata. A pipeline function can mutate the token string
+ * or mutate (or add) metadata for a given token.
+ *
+ * A pipeline function can indicate that the passed token should be discarded by returning
+ * null, undefined or an empty string. This token will not be passed to any downstream pipeline
+ * functions and will not be added to the index.
+ *
+ * Multiple tokens can be returned by returning an array of tokens. Each token will be passed
+ * to any downstream pipeline functions and all returned tokens will be added to the index.
+ *
+ * Any number of pipeline functions may be chained together using a lunr.Pipeline.
+ *
+ * @interface lunr.PipelineFunction
+ * @param {lunr.Token} token - A token from the document being processed.
+ * @param {number} i - The index of this token in the complete list of tokens for this document/field.
+ * @param {lunr.Token[]} tokens - All tokens for this document/field.
+ * @returns {(?lunr.Token|lunr.Token[])}
+ */
+
+/**
+ * Register a function with the pipeline.
+ *
+ * Functions that are used in the pipeline should be registered if the pipeline
+ * needs to be serialised, or a serialised pipeline needs to be loaded.
+ *
+ * Registering a function does not add it to a pipeline, functions must still be
+ * added to instances of the pipeline for them to be used when running a pipeline.
+ *
+ * @param {lunr.PipelineFunction} fn - The function to check for.
+ * @param {String} label - The label to register this function with
+ */
+lunr.Pipeline.registerFunction = function (fn, label) {
+ if (label in this.registeredFunctions) {
+ lunr.utils.warn('Overwriting existing registered function: ' + label)
+ }
+
+ fn.label = label
+ lunr.Pipeline.registeredFunctions[fn.label] = fn
+}
+
+/**
+ * Warns if the function is not registered as a Pipeline function.
+ *
+ * @param {lunr.PipelineFunction} fn - The function to check for.
+ * @private
+ */
+lunr.Pipeline.warnIfFunctionNotRegistered = function (fn) {
+ var isRegistered = fn.label && (fn.label in this.registeredFunctions)
+
+ if (!isRegistered) {
+ lunr.utils.warn('Function is not registered with pipeline. This may cause problems when serialising the index.\n', fn)
+ }
+}
+
+/**
+ * Loads a previously serialised pipeline.
+ *
+ * All functions to be loaded must already be registered with lunr.Pipeline.
+ * If any function from the serialised data has not been registered then an
+ * error will be thrown.
+ *
+ * @param {Object} serialised - The serialised pipeline to load.
+ * @returns {lunr.Pipeline}
+ */
+lunr.Pipeline.load = function (serialised) {
+ var pipeline = new lunr.Pipeline
+
+ serialised.forEach(function (fnName) {
+ var fn = lunr.Pipeline.registeredFunctions[fnName]
+
+ if (fn) {
+ pipeline.add(fn)
+ } else {
+ throw new Error('Cannot load unregistered function: ' + fnName)
+ }
+ })
+
+ return pipeline
+}
+
+/**
+ * Adds new functions to the end of the pipeline.
+ *
+ * Logs a warning if the function has not been registered.
+ *
+ * @param {lunr.PipelineFunction[]} functions - Any number of functions to add to the pipeline.
+ */
+lunr.Pipeline.prototype.add = function () {
+ var fns = Array.prototype.slice.call(arguments)
+
+ fns.forEach(function (fn) {
+ lunr.Pipeline.warnIfFunctionNotRegistered(fn)
+ this._stack.push(fn)
+ }, this)
+}
+
+/**
+ * Adds a single function after a function that already exists in the
+ * pipeline.
+ *
+ * Logs a warning if the function has not been registered.
+ *
+ * @param {lunr.PipelineFunction} existingFn - A function that already exists in the pipeline.
+ * @param {lunr.PipelineFunction} newFn - The new function to add to the pipeline.
+ */
+lunr.Pipeline.prototype.after = function (existingFn, newFn) {
+ lunr.Pipeline.warnIfFunctionNotRegistered(newFn)
+
+ var pos = this._stack.indexOf(existingFn)
+ if (pos == -1) {
+ throw new Error('Cannot find existingFn')
+ }
+
+ pos = pos + 1
+ this._stack.splice(pos, 0, newFn)
+}
+
+/**
+ * Adds a single function before a function that already exists in the
+ * pipeline.
+ *
+ * Logs a warning if the function has not been registered.
+ *
+ * @param {lunr.PipelineFunction} existingFn - A function that already exists in the pipeline.
+ * @param {lunr.PipelineFunction} newFn - The new function to add to the pipeline.
+ */
+lunr.Pipeline.prototype.before = function (existingFn, newFn) {
+ lunr.Pipeline.warnIfFunctionNotRegistered(newFn)
+
+ var pos = this._stack.indexOf(existingFn)
+ if (pos == -1) {
+ throw new Error('Cannot find existingFn')
+ }
+
+ this._stack.splice(pos, 0, newFn)
+}
+
+/**
+ * Removes a function from the pipeline.
+ *
+ * @param {lunr.PipelineFunction} fn The function to remove from the pipeline.
+ */
+lunr.Pipeline.prototype.remove = function (fn) {
+ var pos = this._stack.indexOf(fn)
+ if (pos == -1) {
+ return
+ }
+
+ this._stack.splice(pos, 1)
+}
+
+/**
+ * Runs the current list of functions that make up the pipeline against the
+ * passed tokens.
+ *
+ * @param {Array} tokens The tokens to run through the pipeline.
+ * @returns {Array}
+ */
+lunr.Pipeline.prototype.run = function (tokens) {
+ var stackLength = this._stack.length
+
+ for (var i = 0; i < stackLength; i++) {
+ var fn = this._stack[i]
+ var memo = []
+
+ for (var j = 0; j < tokens.length; j++) {
+ var result = fn(tokens[j], j, tokens)
+
+ if (result === null || result === void 0 || result === '') continue
+
+ if (Array.isArray(result)) {
+ for (var k = 0; k < result.length; k++) {
+ memo.push(result[k])
+ }
+ } else {
+ memo.push(result)
+ }
+ }
+
+ tokens = memo
+ }
+
+ return tokens
+}
+
+/**
+ * Convenience method for passing a string through a pipeline and getting
+ * strings out. This method takes care of wrapping the passed string in a
+ * token and mapping the resulting tokens back to strings.
+ *
+ * @param {string} str - The string to pass through the pipeline.
+ * @param {?object} metadata - Optional metadata to associate with the token
+ * passed to the pipeline.
+ * @returns {string[]}
+ */
+lunr.Pipeline.prototype.runString = function (str, metadata) {
+ var token = new lunr.Token (str, metadata)
+
+ return this.run([token]).map(function (t) {
+ return t.toString()
+ })
+}
+
+/**
+ * Resets the pipeline by removing any existing processors.
+ *
+ */
+lunr.Pipeline.prototype.reset = function () {
+ this._stack = []
+}
+
+/**
+ * Returns a representation of the pipeline ready for serialisation.
+ *
+ * Logs a warning if the function has not been registered.
+ *
+ * @returns {Array}
+ */
+lunr.Pipeline.prototype.toJSON = function () {
+ return this._stack.map(function (fn) {
+ lunr.Pipeline.warnIfFunctionNotRegistered(fn)
+
+ return fn.label
+ })
+}
+/*!
+ * lunr.Vector
+ * Copyright (C) 2020 Oliver Nightingale
+ */
+
+/**
+ * A vector is used to construct the vector space of documents and queries. These
+ * vectors support operations to determine the similarity between two documents or
+ * a document and a query.
+ *
+ * Normally no parameters are required for initializing a vector, but in the case of
+ * loading a previously dumped vector the raw elements can be provided to the constructor.
+ *
+ * For performance reasons vectors are implemented with a flat array, where an elements
+ * index is immediately followed by its value. E.g. [index, value, index, value]. This
+ * allows the underlying array to be as sparse as possible and still offer decent
+ * performance when being used for vector calculations.
+ *
+ * @constructor
+ * @param {Number[]} [elements] - The flat list of element index and element value pairs.
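+ *
+ * @example <caption>A sketch of the flat pair layout; the values are illustrative.</caption>
+ * // stores the value 1.2 at index 0 and the value 0.7 at index 3
+ * var v = new lunr.Vector([0, 1.2, 3, 0.7])
+ * v.magnitude() // => Math.sqrt(1.2 * 1.2 + 0.7 * 0.7)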
+ */
+lunr.Vector = function (elements) {
+ this._magnitude = 0
+ this.elements = elements || []
+}
+
+
+/**
+ * Calculates the position within the vector to insert a given index.
+ *
+ * This is used internally by insert and upsert. If there are duplicate indexes then
+ * the position is returned as if the value for that index were to be updated, but it
+ * is the caller's responsibility to check whether there is a duplicate at that index
+ *
+ * @param {Number} index - The index whose position within the vector should be located.
+ * @returns {Number}
+ */
+lunr.Vector.prototype.positionForIndex = function (index) {
+ // For an empty vector the tuple can be inserted at the beginning
+ if (this.elements.length == 0) {
+ return 0
+ }
+
+ var start = 0,
+ end = this.elements.length / 2,
+ sliceLength = end - start,
+ pivotPoint = Math.floor(sliceLength / 2),
+ pivotIndex = this.elements[pivotPoint * 2]
+
+ while (sliceLength > 1) {
+ if (pivotIndex < index) {
+ start = pivotPoint
+ }
+
+ if (pivotIndex > index) {
+ end = pivotPoint
+ }
+
+ if (pivotIndex == index) {
+ break
+ }
+
+ sliceLength = end - start
+ pivotPoint = start + Math.floor(sliceLength / 2)
+ pivotIndex = this.elements[pivotPoint * 2]
+ }
+
+ if (pivotIndex == index) {
+ return pivotPoint * 2
+ }
+
+ if (pivotIndex > index) {
+ return pivotPoint * 2
+ }
+
+ if (pivotIndex < index) {
+ return (pivotPoint + 1) * 2
+ }
+}
+
+/**
+ * Inserts an element at an index within the vector.
+ *
+ * Does not allow duplicates, will throw an error if there is already an entry
+ * for this index.
+ *
+ * @param {Number} insertIdx - The index at which the element should be inserted.
+ * @param {Number} val - The value to be inserted into the vector.
+ */
+lunr.Vector.prototype.insert = function (insertIdx, val) {
+ this.upsert(insertIdx, val, function () {
+ throw "duplicate index"
+ })
+}
+
+/**
+ * Inserts or updates an existing index within the vector.
+ *
+ * @param {Number} insertIdx - The index at which the element should be inserted.
+ * @param {Number} val - The value to be inserted into the vector.
+ * @param {function} fn - A function that is called for updates, the existing value and the
+ * requested value are passed as arguments
+ */
+lunr.Vector.prototype.upsert = function (insertIdx, val, fn) {
+ this._magnitude = 0
+ var position = this.positionForIndex(insertIdx)
+
+ if (this.elements[position] == insertIdx) {
+ this.elements[position + 1] = fn(this.elements[position + 1], val)
+ } else {
+ this.elements.splice(position, 0, insertIdx, val)
+ }
+}
+
+/**
+ * Calculates the magnitude of this vector.
+ *
+ * @returns {Number}
+ */
+lunr.Vector.prototype.magnitude = function () {
+ if (this._magnitude) return this._magnitude
+
+ var sumOfSquares = 0,
+ elementsLength = this.elements.length
+
+ for (var i = 1; i < elementsLength; i += 2) {
+ var val = this.elements[i]
+ sumOfSquares += val * val
+ }
+
+ return this._magnitude = Math.sqrt(sumOfSquares)
+}
+
+/**
+ * Calculates the dot product of this vector and another vector.
+ *
+ * @param {lunr.Vector} otherVector - The vector to compute the dot product with.
+ * @returns {Number}
+ */
+lunr.Vector.prototype.dot = function (otherVector) {
+ var dotProduct = 0,
+ a = this.elements, b = otherVector.elements,
+ aLen = a.length, bLen = b.length,
+ aVal = 0, bVal = 0,
+ i = 0, j = 0
+
+ while (i < aLen && j < bLen) {
+ aVal = a[i], bVal = b[j]
+ if (aVal < bVal) {
+ i += 2
+ } else if (aVal > bVal) {
+ j += 2
+ } else if (aVal == bVal) {
+ dotProduct += a[i + 1] * b[j + 1]
+ i += 2
+ j += 2
+ }
+ }
+
+ return dotProduct
+}
+
+/**
+ * Calculates the similarity between this vector and another vector.
+ *
+ * @param {lunr.Vector} otherVector - The other vector to calculate the
+ * similarity with.
+ * @returns {Number}
+ */
+lunr.Vector.prototype.similarity = function (otherVector) {
+ return this.dot(otherVector) / this.magnitude() || 0
+}
+
+/**
+ * Converts the vector to an array of the elements within the vector.
+ *
+ * @returns {Number[]}
+ */
+lunr.Vector.prototype.toArray = function () {
+ var output = new Array (this.elements.length / 2)
+
+ for (var i = 1, j = 0; i < this.elements.length; i += 2, j++) {
+ output[j] = this.elements[i]
+ }
+
+ return output
+}
+
+/**
+ * A JSON serializable representation of the vector.
+ *
+ * @returns {Number[]}
+ */
+lunr.Vector.prototype.toJSON = function () {
+ return this.elements
+}
+/* eslint-disable */
+/*!
+ * lunr.stemmer
+ * Copyright (C) 2020 Oliver Nightingale
+ * Includes code from - http://tartarus.org/~martin/PorterStemmer/js.txt
+ */
+
+/**
+ * lunr.stemmer is an english language stemmer, this is a JavaScript
+ * implementation of the PorterStemmer taken from http://tartarus.org/~martin
+ *
+ * @static
+ * @implements {lunr.PipelineFunction}
+ * @param {lunr.Token} token - The string to stem
+ * @returns {lunr.Token}
+ * @see {@link lunr.Pipeline}
+ * @function
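+ *
+ * @example <caption>A minimal usage sketch; the token is illustrative.</caption>
+ * lunr.stemmer(new lunr.Token('searching')).toString() // => 'search'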
+ */
+lunr.stemmer = (function(){
+ var step2list = {
+ "ational" : "ate",
+ "tional" : "tion",
+ "enci" : "ence",
+ "anci" : "ance",
+ "izer" : "ize",
+ "bli" : "ble",
+ "alli" : "al",
+ "entli" : "ent",
+ "eli" : "e",
+ "ousli" : "ous",
+ "ization" : "ize",
+ "ation" : "ate",
+ "ator" : "ate",
+ "alism" : "al",
+ "iveness" : "ive",
+ "fulness" : "ful",
+ "ousness" : "ous",
+ "aliti" : "al",
+ "iviti" : "ive",
+ "biliti" : "ble",
+ "logi" : "log"
+ },
+
+ step3list = {
+ "icate" : "ic",
+ "ative" : "",
+ "alize" : "al",
+ "iciti" : "ic",
+ "ical" : "ic",
+ "ful" : "",
+ "ness" : ""
+ },
+
+ c = "[^aeiou]", // consonant
+ v = "[aeiouy]", // vowel
+ C = c + "[^aeiouy]*", // consonant sequence
+ V = v + "[aeiou]*", // vowel sequence
+
+ mgr0 = "^(" + C + ")?" + V + C, // [C]VC... is m>0
+ meq1 = "^(" + C + ")?" + V + C + "(" + V + ")?$", // [C]VC[V] is m=1
+ mgr1 = "^(" + C + ")?" + V + C + V + C, // [C]VCVC... is m>1
+ s_v = "^(" + C + ")?" + v; // vowel in stem
+
+ var re_mgr0 = new RegExp(mgr0);
+ var re_mgr1 = new RegExp(mgr1);
+ var re_meq1 = new RegExp(meq1);
+ var re_s_v = new RegExp(s_v);
+
+ var re_1a = /^(.+?)(ss|i)es$/;
+ var re2_1a = /^(.+?)([^s])s$/;
+ var re_1b = /^(.+?)eed$/;
+ var re2_1b = /^(.+?)(ed|ing)$/;
+ var re_1b_2 = /.$/;
+ var re2_1b_2 = /(at|bl|iz)$/;
+ var re3_1b_2 = new RegExp("([^aeiouylsz])\\1$");
+ var re4_1b_2 = new RegExp("^" + C + v + "[^aeiouwxy]$");
+
+ var re_1c = /^(.+?[^aeiou])y$/;
+ var re_2 = /^(.+?)(ational|tional|enci|anci|izer|bli|alli|entli|eli|ousli|ization|ation|ator|alism|iveness|fulness|ousness|aliti|iviti|biliti|logi)$/;
+
+ var re_3 = /^(.+?)(icate|ative|alize|iciti|ical|ful|ness)$/;
+
+ var re_4 = /^(.+?)(al|ance|ence|er|ic|able|ible|ant|ement|ment|ent|ou|ism|ate|iti|ous|ive|ize)$/;
+ var re2_4 = /^(.+?)(s|t)(ion)$/;
+
+ var re_5 = /^(.+?)e$/;
+ var re_5_1 = /ll$/;
+ var re3_5 = new RegExp("^" + C + v + "[^aeiouwxy]$");
+
+ var porterStemmer = function porterStemmer(w) {
+ var stem,
+ suffix,
+ firstch,
+ re,
+ re2,
+ re3,
+ re4;
+
+ if (w.length < 3) { return w; }
+
+ firstch = w.substr(0,1);
+ if (firstch == "y") {
+ w = firstch.toUpperCase() + w.substr(1);
+ }
+
+ // Step 1a
+ re = re_1a
+ re2 = re2_1a;
+
+ if (re.test(w)) { w = w.replace(re,"$1$2"); }
+ else if (re2.test(w)) { w = w.replace(re2,"$1$2"); }
+
+ // Step 1b
+ re = re_1b;
+ re2 = re2_1b;
+ if (re.test(w)) {
+ var fp = re.exec(w);
+ re = re_mgr0;
+ if (re.test(fp[1])) {
+ re = re_1b_2;
+ w = w.replace(re,"");
+ }
+ } else if (re2.test(w)) {
+ var fp = re2.exec(w);
+ stem = fp[1];
+ re2 = re_s_v;
+ if (re2.test(stem)) {
+ w = stem;
+ re2 = re2_1b_2;
+ re3 = re3_1b_2;
+ re4 = re4_1b_2;
+ if (re2.test(w)) { w = w + "e"; }
+ else if (re3.test(w)) { re = re_1b_2; w = w.replace(re,""); }
+ else if (re4.test(w)) { w = w + "e"; }
+ }
+ }
+
+ // Step 1c - replace suffix y or Y by i if preceded by a non-vowel which is not the first letter of the word (so cry -> cri, by -> by, say -> say)
+ re = re_1c;
+ if (re.test(w)) {
+ var fp = re.exec(w);
+ stem = fp[1];
+ w = stem + "i";
+ }
+
+ // Step 2
+ re = re_2;
+ if (re.test(w)) {
+ var fp = re.exec(w);
+ stem = fp[1];
+ suffix = fp[2];
+ re = re_mgr0;
+ if (re.test(stem)) {
+ w = stem + step2list[suffix];
+ }
+ }
+
+ // Step 3
+ re = re_3;
+ if (re.test(w)) {
+ var fp = re.exec(w);
+ stem = fp[1];
+ suffix = fp[2];
+ re = re_mgr0;
+ if (re.test(stem)) {
+ w = stem + step3list[suffix];
+ }
+ }
+
+ // Step 4
+ re = re_4;
+ re2 = re2_4;
+ if (re.test(w)) {
+ var fp = re.exec(w);
+ stem = fp[1];
+ re = re_mgr1;
+ if (re.test(stem)) {
+ w = stem;
+ }
+ } else if (re2.test(w)) {
+ var fp = re2.exec(w);
+ stem = fp[1] + fp[2];
+ re2 = re_mgr1;
+ if (re2.test(stem)) {
+ w = stem;
+ }
+ }
+
+ // Step 5
+ re = re_5;
+ if (re.test(w)) {
+ var fp = re.exec(w);
+ stem = fp[1];
+ re = re_mgr1;
+ re2 = re_meq1;
+ re3 = re3_5;
+ if (re.test(stem) || (re2.test(stem) && !(re3.test(stem)))) {
+ w = stem;
+ }
+ }
+
+ re = re_5_1;
+ re2 = re_mgr1;
+ if (re.test(w) && re2.test(w)) {
+ re = re_1b_2;
+ w = w.replace(re,"");
+ }
+
+ // and turn initial Y back to y
+
+ if (firstch == "y") {
+ w = firstch.toLowerCase() + w.substr(1);
+ }
+
+ return w;
+ };
+
+ return function (token) {
+ return token.update(porterStemmer);
+ }
+})();
+
+lunr.Pipeline.registerFunction(lunr.stemmer, 'stemmer')
+/*!
+ * lunr.stopWordFilter
+ * Copyright (C) 2020 Oliver Nightingale
+ */
+
+/**
+ * lunr.generateStopWordFilter builds a stopWordFilter function from the provided
+ * list of stop words.
+ *
+ * The built in lunr.stopWordFilter is built using this generator and can be used
+ * to generate custom stopWordFilters for applications or non English languages.
+ *
+ * @function
+ * @param {Array} stopWords The list of stop words to filter out
+ * @returns {lunr.PipelineFunction}
+ * @see lunr.Pipeline
+ * @see lunr.stopWordFilter
+ */
+lunr.generateStopWordFilter = function (stopWords) {
+ var words = stopWords.reduce(function (memo, stopWord) {
+ memo[stopWord] = stopWord
+ return memo
+ }, {})
+
+ return function (token) {
+ if (token && words[token.toString()] !== token.toString()) return token
+ }
+}
+
+/**
+ * lunr.stopWordFilter is an English language stop word list filter, any words
+ * contained in the list will not be passed through the filter.
+ *
+ * This is intended to be used in the Pipeline. If the token does not pass the
+ * filter then undefined will be returned.
+ *
+ * @function
+ * @implements {lunr.PipelineFunction}
+ * @param {lunr.Token} token - A token to check for being a stop word.
+ * @returns {lunr.Token}
+ * @see {@link lunr.Pipeline}
+ */
+lunr.stopWordFilter = lunr.generateStopWordFilter([
+ 'a',
+ 'able',
+ 'about',
+ 'across',
+ 'after',
+ 'all',
+ 'almost',
+ 'also',
+ 'am',
+ 'among',
+ 'an',
+ 'and',
+ 'any',
+ 'are',
+ 'as',
+ 'at',
+ 'be',
+ 'because',
+ 'been',
+ 'but',
+ 'by',
+ 'can',
+ 'cannot',
+ 'could',
+ 'dear',
+ 'did',
+ 'do',
+ 'does',
+ 'either',
+ 'else',
+ 'ever',
+ 'every',
+ 'for',
+ 'from',
+ 'get',
+ 'got',
+ 'had',
+ 'has',
+ 'have',
+ 'he',
+ 'her',
+ 'hers',
+ 'him',
+ 'his',
+ 'how',
+ 'however',
+ 'i',
+ 'if',
+ 'in',
+ 'into',
+ 'is',
+ 'it',
+ 'its',
+ 'just',
+ 'least',
+ 'let',
+ 'like',
+ 'likely',
+ 'may',
+ 'me',
+ 'might',
+ 'most',
+ 'must',
+ 'my',
+ 'neither',
+ 'no',
+ 'nor',
+ 'not',
+ 'of',
+ 'off',
+ 'often',
+ 'on',
+ 'only',
+ 'or',
+ 'other',
+ 'our',
+ 'own',
+ 'rather',
+ 'said',
+ 'say',
+ 'says',
+ 'she',
+ 'should',
+ 'since',
+ 'so',
+ 'some',
+ 'than',
+ 'that',
+ 'the',
+ 'their',
+ 'them',
+ 'then',
+ 'there',
+ 'these',
+ 'they',
+ 'this',
+ 'tis',
+ 'to',
+ 'too',
+ 'twas',
+ 'us',
+ 'wants',
+ 'was',
+ 'we',
+ 'were',
+ 'what',
+ 'when',
+ 'where',
+ 'which',
+ 'while',
+ 'who',
+ 'whom',
+ 'why',
+ 'will',
+ 'with',
+ 'would',
+ 'yet',
+ 'you',
+ 'your'
+])
+
+lunr.Pipeline.registerFunction(lunr.stopWordFilter, 'stopWordFilter')
+/*!
+ * lunr.trimmer
+ * Copyright (C) 2020 Oliver Nightingale
+ */
+
+/**
+ * lunr.trimmer is a pipeline function for trimming non word
+ * characters from the beginning and end of tokens before they
+ * enter the index.
+ *
+ * This implementation may not work correctly for non latin
+ * characters and should either be removed or adapted for use
+ * with languages with non-latin characters.
+ *
+ * @static
+ * @implements {lunr.PipelineFunction}
+ * @param {lunr.Token} token The token to pass through the filter
+ * @returns {lunr.Token}
+ * @see lunr.Pipeline
+ */
+lunr.trimmer = function (token) {
+ return token.update(function (s) {
+ return s.replace(/^\W+/, '').replace(/\W+$/, '')
+ })
+}
+
+lunr.Pipeline.registerFunction(lunr.trimmer, 'trimmer')
+/*!
+ * lunr.TokenSet
+ * Copyright (C) 2020 Oliver Nightingale
+ */
+
+/**
+ * A token set is used to store the unique list of all tokens
+ * within an index. Token sets are also used to represent an
+ * incoming query to the index, this query token set and index
+ * token set are then intersected to find which tokens to look
+ * up in the inverted index.
+ *
+ * A token set can hold multiple tokens, as in the case of the
+ * index token set, or it can hold a single token as in the
+ * case of a simple query token set.
+ *
+ * Additionally token sets are used to perform wildcard matching.
+ * Leading, contained and trailing wildcards are supported, and
+ * from this edit distance matching can also be provided.
+ *
+ * Token sets are implemented as a minimal finite state automata,
+ * where both common prefixes and suffixes are shared between tokens.
+ * This helps to reduce the space used for storing the token set.
+ *
+ * @constructor
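+ *
+ * @example <caption>A sketch of wildcard matching; the words are illustrative.</caption>
+ * // fromArray requires its input to be sorted
+ * var indexed = lunr.TokenSet.fromArray(['cart', 'cat'])
+ * var query = lunr.TokenSet.fromString('ca*')
+ * indexed.intersect(query).toArray() // => both 'cart' and 'cat' (order not guaranteed)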
+ */
+lunr.TokenSet = function () {
+ this.final = false
+ this.edges = {}
+ this.id = lunr.TokenSet._nextId
+ lunr.TokenSet._nextId += 1
+}
+
+/**
+ * Keeps track of the next, auto increment, identifier to assign
+ * to a new tokenSet.
+ *
+ * TokenSets require a unique identifier to be correctly minimised.
+ *
+ * @private
+ */
+lunr.TokenSet._nextId = 1
+
+/**
+ * Creates a TokenSet instance from the given sorted array of words.
+ *
+ * @param {String[]} arr - A sorted array of strings to create the set from.
+ * @returns {lunr.TokenSet}
+ * @throws Will throw an error if the input array is not sorted.
+ */
+lunr.TokenSet.fromArray = function (arr) {
+ var builder = new lunr.TokenSet.Builder
+
+ for (var i = 0, len = arr.length; i < len; i++) {
+ builder.insert(arr[i])
+ }
+
+ builder.finish()
+ return builder.root
+}
+
+/**
+ * Creates a token set from a query clause.
+ *
+ * @private
+ * @param {Object} clause - A single clause from lunr.Query.
+ * @param {string} clause.term - The query clause term.
+ * @param {number} [clause.editDistance] - The optional edit distance for the term.
+ * @returns {lunr.TokenSet}
+ */
+lunr.TokenSet.fromClause = function (clause) {
+ if ('editDistance' in clause) {
+ return lunr.TokenSet.fromFuzzyString(clause.term, clause.editDistance)
+ } else {
+ return lunr.TokenSet.fromString(clause.term)
+ }
+}
+
+/**
+ * Creates a token set representing a single string with a specified
+ * edit distance.
+ *
+ * Insertions, deletions, substitutions and transpositions are each
+ * treated as an edit distance of 1.
+ *
+ * Increasing the allowed edit distance will have a dramatic impact
+ * on the performance of both creating and intersecting these TokenSets.
+ * It is advised to keep the edit distance less than 3.
+ *
+ * @param {string} str - The string to create the token set from.
+ * @param {number} editDistance - The allowed edit distance to match.
+ * @returns {lunr.TokenSet}
+ */
+lunr.TokenSet.fromFuzzyString = function (str, editDistance) {
+ var root = new lunr.TokenSet
+
+ var stack = [{
+ node: root,
+ editsRemaining: editDistance,
+ str: str
+ }]
+
+ while (stack.length) {
+ var frame = stack.pop()
+
+ // no edit
+ if (frame.str.length > 0) {
+ var char = frame.str.charAt(0),
+ noEditNode
+
+ if (char in frame.node.edges) {
+ noEditNode = frame.node.edges[char]
+ } else {
+ noEditNode = new lunr.TokenSet
+ frame.node.edges[char] = noEditNode
+ }
+
+ if (frame.str.length == 1) {
+ noEditNode.final = true
+ }
+
+ stack.push({
+ node: noEditNode,
+ editsRemaining: frame.editsRemaining,
+ str: frame.str.slice(1)
+ })
+ }
+
+ if (frame.editsRemaining == 0) {
+ continue
+ }
+
+ // insertion
+ if ("*" in frame.node.edges) {
+ var insertionNode = frame.node.edges["*"]
+ } else {
+ var insertionNode = new lunr.TokenSet
+ frame.node.edges["*"] = insertionNode
+ }
+
+ if (frame.str.length == 0) {
+ insertionNode.final = true
+ }
+
+ stack.push({
+ node: insertionNode,
+ editsRemaining: frame.editsRemaining - 1,
+ str: frame.str
+ })
+
+ // deletion
+ // can only do a deletion if we have enough edits remaining
+ // and if there are characters left to delete in the string
+ if (frame.str.length > 1) {
+ stack.push({
+ node: frame.node,
+ editsRemaining: frame.editsRemaining - 1,
+ str: frame.str.slice(1)
+ })
+ }
+
+ // deletion
+ // just removing the last character from the str
+ if (frame.str.length == 1) {
+ frame.node.final = true
+ }
+
+ // substitution
+ // can only do a substitution if we have enough edits remaining
+ // and if there are characters left to substitute
+ if (frame.str.length >= 1) {
+ if ("*" in frame.node.edges) {
+ var substitutionNode = frame.node.edges["*"]
+ } else {
+ var substitutionNode = new lunr.TokenSet
+ frame.node.edges["*"] = substitutionNode
+ }
+
+ if (frame.str.length == 1) {
+ substitutionNode.final = true
+ }
+
+ stack.push({
+ node: substitutionNode,
+ editsRemaining: frame.editsRemaining - 1,
+ str: frame.str.slice(1)
+ })
+ }
+
+ // transposition
+ // can only do a transposition if there are edits remaining
+ // and there are enough characters to transpose
+ if (frame.str.length > 1) {
+ var charA = frame.str.charAt(0),
+ charB = frame.str.charAt(1),
+ transposeNode
+
+ if (charB in frame.node.edges) {
+ transposeNode = frame.node.edges[charB]
+ } else {
+ transposeNode = new lunr.TokenSet
+ frame.node.edges[charB] = transposeNode
+ }
+
+ if (frame.str.length == 1) {
+ transposeNode.final = true
+ }
+
+ stack.push({
+ node: transposeNode,
+ editsRemaining: frame.editsRemaining - 1,
+ str: charA + frame.str.slice(2)
+ })
+ }
+ }
+
+ return root
+}
+
+/**
+ * Creates a TokenSet from a string.
+ *
+ * The string may contain one or more wildcard characters (*)
+ * that will allow wildcard matching when intersecting with
+ * another TokenSet.
+ *
+ * @param {string} str - The string to create a TokenSet from.
+ * @returns {lunr.TokenSet}
+ */
+lunr.TokenSet.fromString = function (str) {
+ var node = new lunr.TokenSet,
+ root = node
+
+ /*
+ * Iterates through all characters within the passed string
+ * appending a node for each character.
+ *
+ * When a wildcard character is found then a self
+ * referencing edge is introduced to continually match
+ * any number of any characters.
+ */
+ for (var i = 0, len = str.length; i < len; i++) {
+ var char = str[i],
+ final = (i == len - 1)
+
+ if (char == "*") {
+ node.edges[char] = node
+ node.final = final
+
+ } else {
+ var next = new lunr.TokenSet
+ next.final = final
+
+ node.edges[char] = next
+ node = next
+ }
+ }
+
+ return root
+}
+
+/**
+ * Converts this TokenSet into an array of strings
+ * contained within the TokenSet.
+ *
+ * This is not intended to be used on a TokenSet that
+ * contains wildcards, in these cases the results are
+ * undefined and are likely to cause an infinite loop.
+ *
+ * @returns {string[]}
+ */
+lunr.TokenSet.prototype.toArray = function () {
+ var words = []
+
+ var stack = [{
+ prefix: "",
+ node: this
+ }]
+
+ while (stack.length) {
+ var frame = stack.pop(),
+ edges = Object.keys(frame.node.edges),
+ len = edges.length
+
+ if (frame.node.final) {
+ /* In Safari, at this point the prefix is sometimes corrupted, see:
+ * https://github.com/olivernn/lunr.js/issues/279 Calling any
+ * String.prototype method forces Safari to "cast" this string to what
+ * it's supposed to be, fixing the bug. */
+ frame.prefix.charAt(0)
+ words.push(frame.prefix)
+ }
+
+ for (var i = 0; i < len; i++) {
+ var edge = edges[i]
+
+ stack.push({
+ prefix: frame.prefix.concat(edge),
+ node: frame.node.edges[edge]
+ })
+ }
+ }
+
+ return words
+}
+
+/**
+ * Generates a string representation of a TokenSet.
+ *
+ * This is intended to allow TokenSets to be used as keys
+ * in objects, largely to aid the construction and minimisation
+ * of a TokenSet. As such it is not designed to be a human
+ * friendly representation of the TokenSet.
+ *
+ * @returns {string}
+ */
+lunr.TokenSet.prototype.toString = function () {
+ // NOTE: Using Object.keys here as this.edges is very likely
+ // to enter 'hash-mode' with many keys being added
+ //
+ // avoiding a for-in loop here as it leads to the function
+ // being de-optimised (at least in V8). From some simple
+ // benchmarks the performance is comparable, but allowing
+ // V8 to optimize may mean easy performance wins in the future.
+
+ if (this._str) {
+ return this._str
+ }
+
+ var str = this.final ? '1' : '0',
+ labels = Object.keys(this.edges).sort(),
+ len = labels.length
+
+ for (var i = 0; i < len; i++) {
+ var label = labels[i],
+ node = this.edges[label]
+
+ str = str + label + node.id
+ }
+
+ return str
+}
+
+/**
+ * Returns a new TokenSet that is the intersection of
+ * this TokenSet and the passed TokenSet.
+ *
+ * This intersection will take into account any wildcards
+ * contained within the TokenSet.
+ *
+ * @param {lunr.TokenSet} b - Another TokenSet to intersect with.
+ * @returns {lunr.TokenSet}
+ */
+lunr.TokenSet.prototype.intersect = function (b) {
+ var output = new lunr.TokenSet,
+ frame = undefined
+
+ var stack = [{
+ qNode: b,
+ output: output,
+ node: this
+ }]
+
+ while (stack.length) {
+ frame = stack.pop()
+
+ // NOTE: As with the #toString method, we are using
+ // Object.keys and a for loop instead of a for-in loop
+ // as both of these objects enter 'hash' mode, causing
+ // the function to be de-optimised in V8
+ var qEdges = Object.keys(frame.qNode.edges),
+ qLen = qEdges.length,
+ nEdges = Object.keys(frame.node.edges),
+ nLen = nEdges.length
+
+ for (var q = 0; q < qLen; q++) {
+ var qEdge = qEdges[q]
+
+ for (var n = 0; n < nLen; n++) {
+ var nEdge = nEdges[n]
+
+ if (nEdge == qEdge || qEdge == '*') {
+ var node = frame.node.edges[nEdge],
+ qNode = frame.qNode.edges[qEdge],
+ final = node.final && qNode.final,
+ next = undefined
+
+ if (nEdge in frame.output.edges) {
+ // an edge already exists for this character
+ // no need to create a new node, just set the finality
+ // bit unless this node is already final
+ next = frame.output.edges[nEdge]
+ next.final = next.final || final
+
+ } else {
+ // no edge exists yet, must create one
+ // set the finality bit and insert it
+ // into the output
+ next = new lunr.TokenSet
+ next.final = final
+ frame.output.edges[nEdge] = next
+ }
+
+ stack.push({
+ qNode: qNode,
+ output: next,
+ node: node
+ })
+ }
+ }
+ }
+ }
+
+ return output
+}
+lunr.TokenSet.Builder = function () {
+ this.previousWord = ""
+ this.root = new lunr.TokenSet
+ this.uncheckedNodes = []
+ this.minimizedNodes = {}
+}
+
+lunr.TokenSet.Builder.prototype.insert = function (word) {
+ var node,
+ commonPrefix = 0
+
+ if (word < this.previousWord) {
+ throw new Error ("Out of order word insertion")
+ }
+
+ for (var i = 0; i < word.length && i < this.previousWord.length; i++) {
+ if (word[i] != this.previousWord[i]) break
+ commonPrefix++
+ }
+
+ this.minimize(commonPrefix)
+
+ if (this.uncheckedNodes.length == 0) {
+ node = this.root
+ } else {
+ node = this.uncheckedNodes[this.uncheckedNodes.length - 1].child
+ }
+
+ for (var i = commonPrefix; i < word.length; i++) {
+ var nextNode = new lunr.TokenSet,
+ char = word[i]
+
+ node.edges[char] = nextNode
+
+ this.uncheckedNodes.push({
+ parent: node,
+ char: char,
+ child: nextNode
+ })
+
+ node = nextNode
+ }
+
+ node.final = true
+ this.previousWord = word
+}
+
+lunr.TokenSet.Builder.prototype.finish = function () {
+ this.minimize(0)
+}
+
+lunr.TokenSet.Builder.prototype.minimize = function (downTo) {
+ for (var i = this.uncheckedNodes.length - 1; i >= downTo; i--) {
+ var node = this.uncheckedNodes[i],
+ childKey = node.child.toString()
+
+ if (childKey in this.minimizedNodes) {
+ node.parent.edges[node.char] = this.minimizedNodes[childKey]
+ } else {
+ // Cache the key for this node since
+ // we know it can't change anymore
+ node.child._str = childKey
+
+ this.minimizedNodes[childKey] = node.child
+ }
+
+ this.uncheckedNodes.pop()
+ }
+}
+/*!
+ * lunr.Index
+ * Copyright (C) 2020 Oliver Nightingale
+ */
+
+/**
+ * An index contains the built index of all documents and provides a query interface
+ * to the index.
+ *
+ * Usually instances of lunr.Index will not be created using this constructor, instead
+ * lunr.Builder should be used to construct new indexes, or lunr.Index.load should be
+ * used to load previously built and serialized indexes.
+ *
+ * @constructor
+ * @param {Object} attrs - The attributes of the built search index.
+ * @param {Object} attrs.invertedIndex - An index of term/field to document reference.
+ * @param {Object} attrs.fieldVectors - Field vectors
+ * @param {lunr.TokenSet} attrs.tokenSet - A set of all corpus tokens.
+ * @param {string[]} attrs.fields - The names of indexed document fields.
+ * @param {lunr.Pipeline} attrs.pipeline - The pipeline to use for search terms.
+ */
+lunr.Index = function (attrs) {
+ this.invertedIndex = attrs.invertedIndex
+ this.fieldVectors = attrs.fieldVectors
+ this.tokenSet = attrs.tokenSet
+ this.fields = attrs.fields
+ this.pipeline = attrs.pipeline
+}
+
+/**
+ * A result contains details of a document matching a search query.
+ * @typedef {Object} lunr.Index~Result
+ * @property {string} ref - The reference of the document this result represents.
+ * @property {number} score - A number between 0 and 1 representing how similar this document is to the query.
+ * @property {lunr.MatchData} matchData - Contains metadata about this match including which term(s) caused the match.
+ */
+
+/**
+ * Although lunr provides the ability to create queries using lunr.Query, it also provides a simple
+ * query language which itself is parsed into an instance of lunr.Query.
+ *
+ * For programmatically building queries it is advised to directly use lunr.Query, the query language
+ * is best used for human entered text rather than program generated text.
+ *
+ * At its simplest a query can be just a single term, e.g. `hello`. Multiple terms are also supported
+ * and will be combined with OR, e.g. `hello world` will match documents that contain either 'hello'
+ * or 'world', though those that contain both will rank higher in the results.
+ *
+ * Wildcards can be included in terms to match one or more unspecified characters, these wildcards can
+ * be inserted anywhere within the term, and more than one wildcard can exist in a single term. Adding
+ * wildcards will increase the number of documents that will be found but can also have a negative
+ * impact on query performance, especially with wildcards at the beginning of a term.
+ *
+ * Terms can be restricted to specific fields, e.g. `title:hello`, only documents with the term
+ * hello in the title field will match this query. Using a field not present in the index will lead
+ * to an error being thrown.
+ *
+ * Modifiers can also be added to terms; lunr supports edit distance and boost modifiers on terms. A term
+ * boost will make documents matching that term score higher, e.g. `foo^5`. Edit distance is also supported
+ * to provide fuzzy matching, e.g. `hello~2` will match documents containing 'hello' within an edit distance of 2.
+ * Avoid large values for edit distance to improve query performance.
+ *
+ * Each term also supports a presence modifier. By default a term's presence in a document is optional; however
+ * this can be changed to either required or prohibited. For a term's presence to be required in a document the
+ * term should be prefixed with a '+', e.g. `+foo bar` is a search for documents that must contain 'foo' and
+ * optionally contain 'bar'. Conversely a leading '-' sets the term's presence to prohibited, i.e. it must not
+ * appear in a document, e.g. `-foo bar` is a search for documents that do not contain 'foo' but may contain 'bar'.
+ *
+ * To escape special characters the backslash character '\' can be used, this allows searches to include
+ * characters that would normally be considered modifiers, e.g. `foo\~2` will search for a term "foo~2" instead
+ * of attempting to apply a boost of 2 to the search term "foo".
+ *
+ * @typedef {string} lunr.Index~QueryString
+ * @example <caption>Simple single term query</caption>
+ * hello
+ * @example <caption>Multiple term query</caption>
+ * hello world
+ * @example <caption>term scoped to a field</caption>
+ * title:hello
+ * @example <caption>term with a boost of 10</caption>
+ * hello^10
+ * @example <caption>term with an edit distance of 2</caption>
+ * hello~2
+ * @example <caption>terms with presence modifiers</caption>
+ * -foo +bar baz
+ */
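+
+/*
+ * A combined example of the query string syntax described above (a sketch,
+ * assuming `idx` is a previously built lunr.Index with a title field):
+ *
+ *   idx.search("+title:hello~1 foo* -spam")
+ *
+ * requires a term within edit distance 1 of "hello" in the title field,
+ * optionally matches terms starting with "foo", and excludes documents
+ * containing "spam".
+ */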
+
+/**
+ * Performs a search against the index using lunr query syntax.
+ *
+ * Results will be returned sorted by their score, the most relevant results
+ * will be returned first. For details on how the score is calculated, please see
+ * the {@link https://lunrjs.com/guides/searching.html#scoring|guide}.
+ *
+ * For more programmatic querying use lunr.Index#query.
+ *
+ * @param {lunr.Index~QueryString} queryString - A string containing a lunr query.
+ * @throws {lunr.QueryParseError} If the passed query string cannot be parsed.
+ * @returns {lunr.Index~Result[]}
+ */
+lunr.Index.prototype.search = function (queryString) {
+ return this.query(function (query) {
+ var parser = new lunr.QueryParser(queryString, query)
+ parser.parse()
+ })
+}
+
+/**
+ * A query builder callback provides a query object to be used to express
+ * the query to perform on the index.
+ *
+ * @callback lunr.Index~queryBuilder
+ * @param {lunr.Query} query - The query object to build up.
+ * @this lunr.Query
+ */
+
+/**
+ * Performs a query against the index using the yielded lunr.Query object.
+ *
+ * If performing programmatic queries against the index, this method is preferred
+ * over lunr.Index#search so as to avoid the additional query parsing overhead.
+ *
+ * A query object is yielded to the supplied function which should be used to
+ * express the query to be run against the index.
+ *
+ * Note that although this function takes a callback parameter it is _not_ an
+ * asynchronous operation, the callback is just yielded a query object to be
+ * customized.
+ *
+ * @param {lunr.Index~queryBuilder} fn - A function that is used to build the query.
+ * @returns {lunr.Index~Result[]}
+ */
+lunr.Index.prototype.query = function (fn) {
+ // for each query clause
+ // * process terms
+ // * expand terms from token set
+ // * find matching documents and metadata
+ // * get document vectors
+ // * score documents
+
+ var query = new lunr.Query(this.fields),
+ matchingFields = Object.create(null),
+ queryVectors = Object.create(null),
+ termFieldCache = Object.create(null),
+ requiredMatches = Object.create(null),
+ prohibitedMatches = Object.create(null)
+
+ /*
+ * To support field level boosts a query vector is created per
+ * field. An empty vector is eagerly created to support negated
+ * queries.
+ */
+ for (var i = 0; i < this.fields.length; i++) {
+ queryVectors[this.fields[i]] = new lunr.Vector
+ }
+
+ fn.call(query, query)
+
+ for (var i = 0; i < query.clauses.length; i++) {
+ /*
+ * Unless the pipeline has been disabled for this term, which is
+ * the case for terms with wildcards, we need to pass the clause
+ * term through the search pipeline. A pipeline returns an array
+ * of processed terms. Pipeline functions may expand the passed
+ * term, which means we may end up performing multiple index lookups
+ * for a single query term.
+ */
+ var clause = query.clauses[i],
+ terms = null,
+ clauseMatches = lunr.Set.empty
+
+ if (clause.usePipeline) {
+ terms = this.pipeline.runString(clause.term, {
+ fields: clause.fields
+ })
+ } else {
+ terms = [clause.term]
+ }
+
+ for (var m = 0; m < terms.length; m++) {
+ var term = terms[m]
+
+ /*
+ * Each term returned from the pipeline needs to use the same query
+ * clause object, e.g. the same boost and or edit distance. The
+ * simplest way to do this is to re-use the clause object but mutate
+ * its term property.
+ */
+ clause.term = term
+
+ /*
+ * From the term in the clause we create a token set which will then
+ * be used to intersect the indexes token set to get a list of terms
+ * to lookup in the inverted index
+ */
+ var termTokenSet = lunr.TokenSet.fromClause(clause),
+ expandedTerms = this.tokenSet.intersect(termTokenSet).toArray()
+
+ /*
+ * If a term marked as required does not exist in the tokenSet it is
+ * impossible for the search to return any matches. We set all the field
+ * scoped required matches set to empty and stop examining any further
+ * clauses.
+ */
+ if (expandedTerms.length === 0 && clause.presence === lunr.Query.presence.REQUIRED) {
+ for (var k = 0; k < clause.fields.length; k++) {
+ var field = clause.fields[k]
+ requiredMatches[field] = lunr.Set.empty
+ }
+
+ break
+ }
+
+ for (var j = 0; j < expandedTerms.length; j++) {
+ /*
+ * For each term get the posting and termIndex, this is required for
+ * building the query vector.
+ */
+ var expandedTerm = expandedTerms[j],
+ posting = this.invertedIndex[expandedTerm],
+ termIndex = posting._index
+
+ for (var k = 0; k < clause.fields.length; k++) {
+ /*
+ * For each field that this query term is scoped by (by default
+ * all fields are in scope) we need to get all the document refs
+ * that have this term in that field.
+ *
+ * The posting is the entry in the invertedIndex for the matching
+ * term from above.
+ */
+ var field = clause.fields[k],
+ fieldPosting = posting[field],
+ matchingDocumentRefs = Object.keys(fieldPosting),
+ termField = expandedTerm + "/" + field,
+ matchingDocumentsSet = new lunr.Set(matchingDocumentRefs)
+
+ /*
+ * if the presence of this term is required ensure that the matching
+ * documents are added to the set of required matches for this clause.
+ *
+ */
+ if (clause.presence == lunr.Query.presence.REQUIRED) {
+ clauseMatches = clauseMatches.union(matchingDocumentsSet)
+
+ if (requiredMatches[field] === undefined) {
+ requiredMatches[field] = lunr.Set.complete
+ }
+ }
+
+ /*
+ * if the presence of this term is prohibited ensure that the matching
+ * documents are added to the set of prohibited matches for this field,
+ * creating that set if it does not yet exist.
+ */
+ if (clause.presence == lunr.Query.presence.PROHIBITED) {
+ if (prohibitedMatches[field] === undefined) {
+ prohibitedMatches[field] = lunr.Set.empty
+ }
+
+ prohibitedMatches[field] = prohibitedMatches[field].union(matchingDocumentsSet)
+
+ /*
+ * Prohibited matches should not be part of the query vector used for
+ * similarity scoring and no metadata should be extracted so we continue
+ * to the next field
+ */
+ continue
+ }
+
+ /*
+ * The query field vector is populated using the termIndex found for
+ * the term and a unit value with the appropriate boost applied.
+ * Using upsert because there could already be an entry in the vector
+ * for the term we are working with. In that case we just add the scores
+ * together.
+ */
+ queryVectors[field].upsert(termIndex, clause.boost, function (a, b) { return a + b })
+
+ /**
+ * If we've already seen this term, field combo then we've already collected
+ * the matching documents and metadata, no need to go through all that again
+ */
+ if (termFieldCache[termField]) {
+ continue
+ }
+
+ for (var l = 0; l < matchingDocumentRefs.length; l++) {
+ /*
+ * All metadata for this term/field/document triple
+ * are then extracted and collected into an instance
+ * of lunr.MatchData ready to be returned in the query
+ * results
+ */
+ var matchingDocumentRef = matchingDocumentRefs[l],
+ matchingFieldRef = new lunr.FieldRef (matchingDocumentRef, field),
+ metadata = fieldPosting[matchingDocumentRef],
+ fieldMatch
+
+ if ((fieldMatch = matchingFields[matchingFieldRef]) === undefined) {
+ matchingFields[matchingFieldRef] = new lunr.MatchData (expandedTerm, field, metadata)
+ } else {
+ fieldMatch.add(expandedTerm, field, metadata)
+ }
+
+ }
+
+ termFieldCache[termField] = true
+ }
+ }
+ }
+
+ /**
+ * If the presence was required we need to update the requiredMatches field sets.
+ * We do this after all fields for the term have collected their matches because
+ * the clause term's presence is required in _any_ of the fields, not _all_ of the
+ * fields.
+ */
+ if (clause.presence === lunr.Query.presence.REQUIRED) {
+ for (var k = 0; k < clause.fields.length; k++) {
+ var field = clause.fields[k]
+ requiredMatches[field] = requiredMatches[field].intersect(clauseMatches)
+ }
+ }
+ }
+
+ /**
+ * Need to combine the field scoped required and prohibited
+ * matching documents into a global set of required and prohibited
+ * matches
+ */
+ var allRequiredMatches = lunr.Set.complete,
+ allProhibitedMatches = lunr.Set.empty
+
+ for (var i = 0; i < this.fields.length; i++) {
+ var field = this.fields[i]
+
+ if (requiredMatches[field]) {
+ allRequiredMatches = allRequiredMatches.intersect(requiredMatches[field])
+ }
+
+ if (prohibitedMatches[field]) {
+ allProhibitedMatches = allProhibitedMatches.union(prohibitedMatches[field])
+ }
+ }
+
+ var matchingFieldRefs = Object.keys(matchingFields),
+ results = [],
+ matches = Object.create(null)
+
+ /*
+ * If the query is negated (contains only prohibited terms)
+ * we need to get _all_ fieldRefs currently existing in the
+ * index. This is only done when we know that the query is
+ * entirely prohibited terms to avoid any cost of getting all
+ * fieldRefs unnecessarily.
+ *
+ * Additionally, blank MatchData must be created to correctly
+ * populate the results.
+ */
+ if (query.isNegated()) {
+ matchingFieldRefs = Object.keys(this.fieldVectors)
+
+ for (var i = 0; i < matchingFieldRefs.length; i++) {
+ var matchingFieldRef = matchingFieldRefs[i]
+ var fieldRef = lunr.FieldRef.fromString(matchingFieldRef)
+ matchingFields[matchingFieldRef] = new lunr.MatchData
+ }
+ }
+
+ for (var i = 0; i < matchingFieldRefs.length; i++) {
+ /*
+ * Currently we have document fields that match the query, but we
+ * need to return documents. The matchData and scores are combined
+ * from multiple fields belonging to the same document.
+ *
+ * Scores are calculated by field, using the query vectors created
+ * above, and combined into a final document score using addition.
+ */
+ var fieldRef = lunr.FieldRef.fromString(matchingFieldRefs[i]),
+ docRef = fieldRef.docRef
+
+ if (!allRequiredMatches.contains(docRef)) {
+ continue
+ }
+
+ if (allProhibitedMatches.contains(docRef)) {
+ continue
+ }
+
+ var fieldVector = this.fieldVectors[fieldRef],
+ score = queryVectors[fieldRef.fieldName].similarity(fieldVector),
+ docMatch
+
+ if ((docMatch = matches[docRef]) !== undefined) {
+ docMatch.score += score
+ docMatch.matchData.combine(matchingFields[fieldRef])
+ } else {
+ var match = {
+ ref: docRef,
+ score: score,
+ matchData: matchingFields[fieldRef]
+ }
+ matches[docRef] = match
+ results.push(match)
+ }
+ }
+
+ /*
+ * Sort the results objects by score, highest first.
+ */
+ return results.sort(function (a, b) {
+ return b.score - a.score
+ })
+}
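+
+/*
+ * A programmatic sketch equivalent to the query string "+title:hello~1 foo*",
+ * assuming `idx` is a built lunr.Index with a title field (see lunr.Query#term below):
+ *
+ *   idx.query(function (q) {
+ *     q.term('hello', {
+ *       fields: ['title'],
+ *       editDistance: 1,
+ *       presence: lunr.Query.presence.REQUIRED
+ *     })
+ *     q.term('foo', { wildcard: lunr.Query.wildcard.TRAILING })
+ *   })
+ */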
+
+/**
+ * Prepares the index for JSON serialization.
+ *
+ * The schema for this JSON blob will be described in a
+ * separate JSON schema file.
+ *
+ * @returns {Object}
+ */
+lunr.Index.prototype.toJSON = function () {
+ var invertedIndex = Object.keys(this.invertedIndex)
+ .sort()
+ .map(function (term) {
+ return [term, this.invertedIndex[term]]
+ }, this)
+
+ var fieldVectors = Object.keys(this.fieldVectors)
+ .map(function (ref) {
+ return [ref, this.fieldVectors[ref].toJSON()]
+ }, this)
+
+ return {
+ version: lunr.version,
+ fields: this.fields,
+ fieldVectors: fieldVectors,
+ invertedIndex: invertedIndex,
+ pipeline: this.pipeline.toJSON()
+ }
+}
+
+/**
+ * Loads a previously serialized lunr.Index
+ *
+ * @param {Object} serializedIndex - A previously serialized lunr.Index
+ * @returns {lunr.Index}
+ */
+lunr.Index.load = function (serializedIndex) {
+ var attrs = {},
+ fieldVectors = {},
+ serializedVectors = serializedIndex.fieldVectors,
+ invertedIndex = Object.create(null),
+ serializedInvertedIndex = serializedIndex.invertedIndex,
+ tokenSetBuilder = new lunr.TokenSet.Builder,
+ pipeline = lunr.Pipeline.load(serializedIndex.pipeline)
+
+ if (serializedIndex.version != lunr.version) {
+ lunr.utils.warn("Version mismatch when loading serialised index. Current version of lunr '" + lunr.version + "' does not match serialized index '" + serializedIndex.version + "'")
+ }
+
+ for (var i = 0; i < serializedVectors.length; i++) {
+ var tuple = serializedVectors[i],
+ ref = tuple[0],
+ elements = tuple[1]
+
+ fieldVectors[ref] = new lunr.Vector(elements)
+ }
+
+ for (var i = 0; i < serializedInvertedIndex.length; i++) {
+ var tuple = serializedInvertedIndex[i],
+ term = tuple[0],
+ posting = tuple[1]
+
+ tokenSetBuilder.insert(term)
+ invertedIndex[term] = posting
+ }
+
+ tokenSetBuilder.finish()
+
+ attrs.fields = serializedIndex.fields
+
+ attrs.fieldVectors = fieldVectors
+ attrs.invertedIndex = invertedIndex
+ attrs.tokenSet = tokenSetBuilder.root
+ attrs.pipeline = pipeline
+
+ return new lunr.Index(attrs)
+}
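+
+/*
+ * Serialization round-trip sketch: JSON.stringify picks up the toJSON method
+ * above, and lunr.Index.load restores a usable index from the parsed object.
+ *
+ *   var serialized = JSON.stringify(idx)
+ *   var restored = lunr.Index.load(JSON.parse(serialized))
+ */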
+/*!
+ * lunr.Builder
+ * Copyright (C) 2020 Oliver Nightingale
+ */
+
+/**
+ * lunr.Builder performs indexing on a set of documents and
+ * returns instances of lunr.Index ready for querying.
+ *
+ * All configuration of the index is done via the builder, the
+ * fields to index, the document reference, the text processing
+ * pipeline and document scoring parameters are all set on the
+ * builder before indexing.
+ *
+ * @constructor
+ * @property {string} _ref - Internal reference to the document reference field.
+ * @property {string[]} _fields - Internal reference to the document fields to index.
+ * @property {object} invertedIndex - The inverted index maps terms to document fields.
+ * @property {object} fieldTermFrequencies - Keeps track of term frequencies per field for each document.
+ * @property {object} fieldLengths - Keeps track of the length of each indexed field for each document.
+ * @property {lunr.tokenizer} tokenizer - Function for splitting strings into tokens for indexing.
+ * @property {lunr.Pipeline} pipeline - The pipeline performs text processing on tokens before indexing.
+ * @property {lunr.Pipeline} searchPipeline - A pipeline for processing search terms before querying the index.
+ * @property {number} documentCount - Keeps track of the total number of documents indexed.
+ * @property {number} _b - A parameter to control field length normalization; setting this to 0 disables normalization, 1 fully normalizes field lengths, and the default value is 0.75.
+ * @property {number} _k1 - A parameter to control how quickly an increase in term frequency results in term frequency saturation, the default value is 1.2.
+ * @property {number} termIndex - A counter incremented for each unique term, used to identify a terms position in the vector space.
+ * @property {array} metadataWhitelist - A list of metadata keys that have been whitelisted for entry in the index.
+ */
+lunr.Builder = function () {
+ this._ref = "id"
+ this._fields = Object.create(null)
+ this._documents = Object.create(null)
+ this.invertedIndex = Object.create(null)
+ this.fieldTermFrequencies = {}
+ this.fieldLengths = {}
+ this.tokenizer = lunr.tokenizer
+ this.pipeline = new lunr.Pipeline
+ this.searchPipeline = new lunr.Pipeline
+ this.documentCount = 0
+ this._b = 0.75
+ this._k1 = 1.2
+ this.termIndex = 0
+ this.metadataWhitelist = []
+}
+
+/**
+ * Sets the document field used as the document reference. Every document must have this field.
+ * The type of this field in the document should be a string, if it is not a string it will be
+ * coerced into a string by calling toString.
+ *
+ * The default ref is 'id'.
+ *
+ * The ref should _not_ be changed during indexing, it should be set before any documents are
+ * added to the index. Changing it during indexing can lead to inconsistent results.
+ *
+ * @param {string} ref - The name of the reference field in the document.
+ */
+lunr.Builder.prototype.ref = function (ref) {
+ this._ref = ref
+}
+
+/**
+ * A function that is used to extract a field from a document.
+ *
+ * Lunr expects a field to be at the top level of a document, if however the field
+ * is deeply nested within a document an extractor function can be used to extract
+ * the right field for indexing.
+ *
+ * @callback fieldExtractor
+ * @param {object} doc - The document being added to the index.
+ * @returns {?(string|object|object[])} obj - The object that will be indexed for this field.
+ * @example <caption>Extracting a nested field</caption>
+ * function (doc) { return doc.nested.field }
+ */
+
+/**
+ * Adds a field to the list of document fields that will be indexed. Every document being
+ * indexed should have this field. Null values for this field in indexed documents will
+ * not cause errors but will limit the chance of that document being retrieved by searches.
+ *
+ * All fields should be added before adding documents to the index. Adding fields after
+ * a document has been indexed will have no effect on already indexed documents.
+ *
+ * Fields can be boosted at build time. This allows terms within that field to have more
+ * importance when ranking search results. Use a field boost to specify that matches within
+ * one field are more important than other fields.
+ *
+ * @param {string} fieldName - The name of a field to index in all documents.
+ * @param {object} attributes - Optional attributes associated with this field.
+ * @param {number} [attributes.boost=1] - Boost applied to all terms within this field.
+ * @param {fieldExtractor} [attributes.extractor] - Function to extract a field from a document.
+ * @throws {RangeError} fieldName cannot contain unsupported characters '/'
+ */
+lunr.Builder.prototype.field = function (fieldName, attributes) {
+ if (/\//.test(fieldName)) {
+ throw new RangeError ("Field '" + fieldName + "' contains illegal character '/'")
+ }
+
+ this._fields[fieldName] = attributes || {}
+}
+
+/**
+ * A parameter to tune the amount of field length normalisation that is applied when
+ * calculating relevance scores. A value of 0 will completely disable any normalisation
+ * and a value of 1 will fully normalise field lengths. The default is 0.75. Values of b
+ * will be clamped to the range 0 - 1.
+ *
+ * @param {number} number - The value to set for this tuning parameter.
+ */
+lunr.Builder.prototype.b = function (number) {
+ if (number < 0) {
+ this._b = 0
+ } else if (number > 1) {
+ this._b = 1
+ } else {
+ this._b = number
+ }
+}
+
+/**
+ * A parameter that controls the speed at which a rise in term frequency results in term
+ * frequency saturation. The default value is 1.2. Setting this to a higher value will give
+ * slower saturation levels, a lower value will result in quicker saturation.
+ *
+ * @param {number} number - The value to set for this tuning parameter.
+ */
+lunr.Builder.prototype.k1 = function (number) {
+ this._k1 = number
+}
+
+/**
+ * Adds a document to the index.
+ *
+ * Before adding fields to the index the index should have been fully setup, with the document
+ * ref and all fields to index already having been specified.
+ *
+ * The document must have a field name as specified by the ref (by default this is 'id') and
+ * it should have all fields defined for indexing, though null or undefined values will not
+ * cause errors.
+ *
+ * Entire documents can be boosted at build time. Applying a boost to a document indicates that
+ * this document should rank higher in search results than other documents.
+ *
+ * @param {object} doc - The document to add to the index.
+ * @param {object} attributes - Optional attributes associated with this document.
+ * @param {number} [attributes.boost=1] - Boost applied to all terms within this document.
+ */
+lunr.Builder.prototype.add = function (doc, attributes) {
+ var docRef = doc[this._ref],
+ fields = Object.keys(this._fields)
+
+ this._documents[docRef] = attributes || {}
+ this.documentCount += 1
+
+ for (var i = 0; i < fields.length; i++) {
+ var fieldName = fields[i],
+ extractor = this._fields[fieldName].extractor,
+ field = extractor ? extractor(doc) : doc[fieldName],
+ tokens = this.tokenizer(field, {
+ fields: [fieldName]
+ }),
+ terms = this.pipeline.run(tokens),
+ fieldRef = new lunr.FieldRef (docRef, fieldName),
+ fieldTerms = Object.create(null)
+
+ this.fieldTermFrequencies[fieldRef] = fieldTerms
+ this.fieldLengths[fieldRef] = 0
+
+ // store the length of this field for this document
+ this.fieldLengths[fieldRef] += terms.length
+
+ // calculate term frequencies for this field
+ for (var j = 0; j < terms.length; j++) {
+ var term = terms[j]
+
+ if (fieldTerms[term] == undefined) {
+ fieldTerms[term] = 0
+ }
+
+ fieldTerms[term] += 1
+
+ // add to inverted index
+ // create an initial posting if one doesn't exist
+ if (this.invertedIndex[term] == undefined) {
+ var posting = Object.create(null)
+ posting["_index"] = this.termIndex
+ this.termIndex += 1
+
+ for (var k = 0; k < fields.length; k++) {
+ posting[fields[k]] = Object.create(null)
+ }
+
+ this.invertedIndex[term] = posting
+ }
+
+ // add an entry for this term/fieldName/docRef to the invertedIndex
+ if (this.invertedIndex[term][fieldName][docRef] == undefined) {
+ this.invertedIndex[term][fieldName][docRef] = Object.create(null)
+ }
+
+ // store all whitelisted metadata about this token in the
+ // inverted index
+ for (var l = 0; l < this.metadataWhitelist.length; l++) {
+ var metadataKey = this.metadataWhitelist[l],
+ metadata = term.metadata[metadataKey]
+
+ if (this.invertedIndex[term][fieldName][docRef][metadataKey] == undefined) {
+ this.invertedIndex[term][fieldName][docRef][metadataKey] = []
+ }
+
+ this.invertedIndex[term][fieldName][docRef][metadataKey].push(metadata)
+ }
+ }
+
+ }
+}
+
+/**
+ * Calculates the average document length for this index
+ *
+ * @private
+ */
+lunr.Builder.prototype.calculateAverageFieldLengths = function () {
+
+ var fieldRefs = Object.keys(this.fieldLengths),
+ numberOfFields = fieldRefs.length,
+ accumulator = {},
+ documentsWithField = {}
+
+ for (var i = 0; i < numberOfFields; i++) {
+ var fieldRef = lunr.FieldRef.fromString(fieldRefs[i]),
+ field = fieldRef.fieldName
+
+ documentsWithField[field] || (documentsWithField[field] = 0)
+ documentsWithField[field] += 1
+
+ accumulator[field] || (accumulator[field] = 0)
+ accumulator[field] += this.fieldLengths[fieldRef]
+ }
+
+ var fields = Object.keys(this._fields)
+
+ for (var i = 0; i < fields.length; i++) {
+ var fieldName = fields[i]
+ accumulator[fieldName] = accumulator[fieldName] / documentsWithField[fieldName]
+ }
+
+ this.averageFieldLength = accumulator
+}
+
+/**
+ * Builds a vector space model of every document using lunr.Vector
+ *
+ * @private
+ */
+lunr.Builder.prototype.createFieldVectors = function () {
+ var fieldVectors = {},
+ fieldRefs = Object.keys(this.fieldTermFrequencies),
+ fieldRefsLength = fieldRefs.length,
+ termIdfCache = Object.create(null)
+
+ for (var i = 0; i < fieldRefsLength; i++) {
+ var fieldRef = lunr.FieldRef.fromString(fieldRefs[i]),
+ fieldName = fieldRef.fieldName,
+ fieldLength = this.fieldLengths[fieldRef],
+ fieldVector = new lunr.Vector,
+ termFrequencies = this.fieldTermFrequencies[fieldRef],
+ terms = Object.keys(termFrequencies),
+ termsLength = terms.length
+
+
+ var fieldBoost = this._fields[fieldName].boost || 1,
+ docBoost = this._documents[fieldRef.docRef].boost || 1
+
+ for (var j = 0; j < termsLength; j++) {
+ var term = terms[j],
+ tf = termFrequencies[term],
+ termIndex = this.invertedIndex[term]._index,
+ idf, score, scoreWithPrecision
+
+ if (termIdfCache[term] === undefined) {
+ idf = lunr.idf(this.invertedIndex[term], this.documentCount)
+ termIdfCache[term] = idf
+ } else {
+ idf = termIdfCache[term]
+ }
+
+ score = idf * ((this._k1 + 1) * tf) / (this._k1 * (1 - this._b + this._b * (fieldLength / this.averageFieldLength[fieldName])) + tf)
+ score *= fieldBoost
+ score *= docBoost
+ scoreWithPrecision = Math.round(score * 1000) / 1000
+ // Converts 1.23456789 to 1.234.
+ // Reducing the precision so that the vectors take up less
+ // space when serialised. Doing it now so that they behave
+ // the same before and after serialisation. Also, this is
+ // the fastest approach to reducing a number's precision in
+ // JavaScript.
+
+ fieldVector.insert(termIndex, scoreWithPrecision)
+ }
+
+ fieldVectors[fieldRef] = fieldVector
+ }
+
+ this.fieldVectors = fieldVectors
+}
+
+/**
+ * Creates a token set of all tokens in the index using lunr.TokenSet
+ *
+ * @private
+ */
+lunr.Builder.prototype.createTokenSet = function () {
+ this.tokenSet = lunr.TokenSet.fromArray(
+ Object.keys(this.invertedIndex).sort()
+ )
+}
+
+/**
+ * Builds the index, creating an instance of lunr.Index.
+ *
+ * This completes the indexing process and should only be called
+ * once all documents have been added to the index.
+ *
+ * @returns {lunr.Index}
+ */
+lunr.Builder.prototype.build = function () {
+ this.calculateAverageFieldLengths()
+ this.createFieldVectors()
+ this.createTokenSet()
+
+ return new lunr.Index({
+ invertedIndex: this.invertedIndex,
+ fieldVectors: this.fieldVectors,
+ tokenSet: this.tokenSet,
+ fields: Object.keys(this._fields),
+ pipeline: this.searchPipeline
+ })
+}
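+
+/*
+ * A minimal end-to-end sketch of the builder, assuming the default (empty)
+ * pipelines are acceptable:
+ *
+ *   var builder = new lunr.Builder
+ *   builder.ref('id')
+ *   builder.field('title')
+ *   builder.add({ id: '1', title: 'hello world' })
+ *   var idx = builder.build()
+ *   idx.search('hello')
+ */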
+
+/**
+ * Applies a plugin to the index builder.
+ *
+ * A plugin is a function that is called with the index builder as its context.
+ * Plugins can be used to customise or extend the behaviour of the index
+ * in some way. A plugin is just a function that encapsulates the custom
+ * behaviour that should be applied when building the index.
+ *
+ * The plugin function will be called with the index builder as its first argument; additional
+ * arguments can also be passed when calling use. The function will also be called
+ * with the index builder as its context (the value of `this`).
+ *
+ * @param {Function} plugin The plugin to apply.
+ */
+lunr.Builder.prototype.use = function (fn) {
+ var args = Array.prototype.slice.call(arguments, 1)
+ args.unshift(this)
+ fn.apply(this, args)
+}
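+
+/*
+ * Plugin sketch: the plugin receives the builder as both `this` and its first
+ * argument, plus any extra arguments passed to use.
+ *
+ *   var addFields = function (builder, fields) {
+ *     fields.forEach(function (f) { builder.field(f) })
+ *   }
+ *
+ *   builder.use(addFields, ['title', 'body'])
+ */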
+/**
+ * Contains and collects metadata about a matching document.
+ * A single instance of lunr.MatchData is returned as part of every
+ * lunr.Index~Result.
+ *
+ * @constructor
+ * @param {string} term - The term this match data is associated with
+ * @param {string} field - The field in which the term was found
+ * @param {object} metadata - The metadata recorded about this term in this field
+ * @property {object} metadata - A cloned collection of metadata associated with this document.
+ * @see {@link lunr.Index~Result}
+ */
+lunr.MatchData = function (term, field, metadata) {
+ var clonedMetadata = Object.create(null),
+ metadataKeys = Object.keys(metadata || {})
+
+ // Cloning the metadata to prevent the original
+ // being mutated during match data combination.
+ // Metadata is kept in an array within the inverted
+ // index so cloning the data can be done with
+ // Array#slice
+ for (var i = 0; i < metadataKeys.length; i++) {
+ var key = metadataKeys[i]
+ clonedMetadata[key] = metadata[key].slice()
+ }
+
+ this.metadata = Object.create(null)
+
+ if (term !== undefined) {
+ this.metadata[term] = Object.create(null)
+ this.metadata[term][field] = clonedMetadata
+ }
+}
+
+/**
+ * An instance of lunr.MatchData will be created for every term that matches a
+ * document. However only one instance is required in a lunr.Index~Result. This
+ * method combines metadata from another instance of lunr.MatchData with this
+ * objects metadata.
+ *
+ * @param {lunr.MatchData} otherMatchData - Another instance of match data to merge with this one.
+ * @see {@link lunr.Index~Result}
+ */
+lunr.MatchData.prototype.combine = function (otherMatchData) {
+ var terms = Object.keys(otherMatchData.metadata)
+
+ for (var i = 0; i < terms.length; i++) {
+ var term = terms[i],
+ fields = Object.keys(otherMatchData.metadata[term])
+
+ if (this.metadata[term] == undefined) {
+ this.metadata[term] = Object.create(null)
+ }
+
+ for (var j = 0; j < fields.length; j++) {
+ var field = fields[j],
+ keys = Object.keys(otherMatchData.metadata[term][field])
+
+ if (this.metadata[term][field] == undefined) {
+ this.metadata[term][field] = Object.create(null)
+ }
+
+ for (var k = 0; k < keys.length; k++) {
+ var key = keys[k]
+
+ if (this.metadata[term][field][key] == undefined) {
+ this.metadata[term][field][key] = otherMatchData.metadata[term][field][key]
+ } else {
+ this.metadata[term][field][key] = this.metadata[term][field][key].concat(otherMatchData.metadata[term][field][key])
+ }
+
+ }
+ }
+ }
+}
+
+/**
+ * Add metadata for a term/field pair to this instance of match data.
+ *
+ * @param {string} term - The term this match data is associated with
+ * @param {string} field - The field in which the term was found
+ * @param {object} metadata - The metadata recorded about this term in this field
+ */
+lunr.MatchData.prototype.add = function (term, field, metadata) {
+ if (!(term in this.metadata)) {
+ this.metadata[term] = Object.create(null)
+ this.metadata[term][field] = metadata
+ return
+ }
+
+ if (!(field in this.metadata[term])) {
+ this.metadata[term][field] = metadata
+ return
+ }
+
+ var metadataKeys = Object.keys(metadata)
+
+ for (var i = 0; i < metadataKeys.length; i++) {
+ var key = metadataKeys[i]
+
+ if (key in this.metadata[term][field]) {
+ this.metadata[term][field][key] = this.metadata[term][field][key].concat(metadata[key])
+ } else {
+ this.metadata[term][field][key] = metadata[key]
+ }
+ }
+}
+/**
+ * A lunr.Query provides a programmatic way of defining queries to be performed
+ * against a {@link lunr.Index}.
+ *
+ * Prefer constructing a lunr.Query using the {@link lunr.Index#query} method
+ * so the query object is pre-initialized with the right index fields.
+ *
+ * @constructor
+ * @property {lunr.Query~Clause[]} clauses - An array of query clauses.
+ * @property {string[]} allFields - An array of all available fields in a lunr.Index.
+ */
+lunr.Query = function (allFields) {
+ this.clauses = []
+ this.allFields = allFields
+}
+
+/**
+ * Constants for indicating what kind of automatic wildcard insertion will be used when constructing a query clause.
+ *
+ * This allows wildcards to be added to the beginning and end of a term without having to manually do any string
+ * concatenation.
+ *
+ * The wildcard constants can be bitwise combined to select both leading and trailing wildcards.
+ *
+ * @constant
+ * @default
+ * @property {number} wildcard.NONE - The term will have no wildcards inserted, this is the default behaviour
+ * @property {number} wildcard.LEADING - Prepend the term with a wildcard, unless a leading wildcard already exists
+ * @property {number} wildcard.TRAILING - Append a wildcard to the term, unless a trailing wildcard already exists
+ * @see lunr.Query~Clause
+ * @see lunr.Query#clause
+ * @see lunr.Query#term
+ * @example
+ * query.term('foo', {
+ * wildcard: lunr.Query.wildcard.LEADING | lunr.Query.wildcard.TRAILING
+ * })
+ */
+
+lunr.Query.wildcard = new String ("*")
+lunr.Query.wildcard.NONE = 0
+lunr.Query.wildcard.LEADING = 1
+lunr.Query.wildcard.TRAILING = 2
+
+/**
+ * Constants for indicating what kind of presence a term must have in matching documents.
+ *
+ * @constant
+ * @enum {number}
+ * @see lunr.Query~Clause
+ * @see lunr.Query#clause
+ * @see lunr.Query#term
+ * @example <caption>query term with required presence</caption>
+ * query.term('foo', { presence: lunr.Query.presence.REQUIRED })
+ */
+lunr.Query.presence = {
+ /**
+ * Term's presence in a document is optional, this is the default value.
+ */
+ OPTIONAL: 1,
+
+ /**
+ * Term's presence in a document is required, documents that do not contain
+ * this term will not be returned.
+ */
+ REQUIRED: 2,
+
+ /**
+ * Term's presence in a document is prohibited, documents that do contain
+ * this term will not be returned.
+ */
+ PROHIBITED: 3
+}
+
+/**
+ * A single clause in a {@link lunr.Query} contains a term and details on how to
+ * match that term against a {@link lunr.Index}.
+ *
+ * @typedef {Object} lunr.Query~Clause
+ * @property {string[]} fields - The fields in an index this clause should be matched against.
+ * @property {number} [boost=1] - Any boost that should be applied when matching this clause.
+ * @property {number} [editDistance] - Whether the term should have fuzzy matching applied, and how fuzzy the match should be.
+ * @property {boolean} [usePipeline] - Whether the term should be passed through the search pipeline.
+ * @property {number} [wildcard=lunr.Query.wildcard.NONE] - Whether the term should have wildcards appended or prepended.
+ * @property {number} [presence=lunr.Query.presence.OPTIONAL] - The term's presence in any matching documents.
+ */
+
+/**
+ * Adds a {@link lunr.Query~Clause} to this query.
+ *
+ * Unless the clause contains the fields to be matched all fields will be matched. In addition
+ * a default boost of 1 is applied to the clause.
+ *
+ * @param {lunr.Query~Clause} clause - The clause to add to this query.
+ * @see lunr.Query~Clause
+ * @returns {lunr.Query}
+ */
+lunr.Query.prototype.clause = function (clause) {
+ if (!('fields' in clause)) {
+ clause.fields = this.allFields
+ }
+
+ if (!('boost' in clause)) {
+ clause.boost = 1
+ }
+
+ if (!('usePipeline' in clause)) {
+ clause.usePipeline = true
+ }
+
+ if (!('wildcard' in clause)) {
+ clause.wildcard = lunr.Query.wildcard.NONE
+ }
+
+ if ((clause.wildcard & lunr.Query.wildcard.LEADING) && (clause.term.charAt(0) != lunr.Query.wildcard)) {
+ clause.term = "*" + clause.term
+ }
+
+ if ((clause.wildcard & lunr.Query.wildcard.TRAILING) && (clause.term.slice(-1) != lunr.Query.wildcard)) {
+ clause.term = "" + clause.term + "*"
+ }
+
+ if (!('presence' in clause)) {
+ clause.presence = lunr.Query.presence.OPTIONAL
+ }
+
+ this.clauses.push(clause)
+
+ return this
+}
+
+/**
+ * A negated query is one in which every clause has a presence of
+ * prohibited. These queries require some special processing to return
+ * the expected results.
+ *
+ * @returns boolean
+ */
+lunr.Query.prototype.isNegated = function () {
+ for (var i = 0; i < this.clauses.length; i++) {
+ if (this.clauses[i].presence != lunr.Query.presence.PROHIBITED) {
+ return false
+ }
+ }
+
+ return true
+}
+
+/**
+ * Adds a term to the current query; under the covers this will add a {@link lunr.Query~Clause}
+ * to the list of clauses that make up this query.
+ *
+ * The term is used as is, i.e. no tokenization will be performed by this method. Instead conversion
+ * to a token or token-like string should be done before calling this method.
+ *
+ * The term will be converted to a string by calling `toString`. Multiple terms can be passed as an
+ * array, each term in the array will share the same options.
+ *
+ * @param {object|object[]} term - The term(s) to add to the query.
+ * @param {object} [options] - Any additional properties to add to the query clause.
+ * @returns {lunr.Query}
+ * @see lunr.Query#clause
+ * @see lunr.Query~Clause
+ * @example <caption>adding a single term to a query</caption>
+ * query.term("foo")
+ * @example <caption>adding a single term to a query and specifying search fields, term boost and automatic trailing wildcard</caption>
';
+ }
+});
+```
+
+### Linkify
+
+`linkify: true` uses [linkify-it](https://github.com/markdown-it/linkify-it). To
+configure linkify-it, access the linkify instance through `md.linkify`:
+
+```js
+md.linkify.set({ fuzzyEmail: false }); // disables converting email to link
+```
+
+
+## API
+
+__[API documentation](https://markdown-it.github.io/markdown-it/)__
+
+If you are going to write plugins, please take a look at
+[Development info](https://github.com/markdown-it/markdown-it/tree/master/docs).
+
+
+## Syntax extensions
+
+Embedded (enabled by default):
+
+- [Tables](https://help.github.com/articles/organizing-information-with-tables/) (GFM)
+- [Strikethrough](https://help.github.com/articles/basic-writing-and-formatting-syntax/#styling-text) (GFM)
+
+Via plugins:
+
+- [subscript](https://github.com/markdown-it/markdown-it-sub)
+- [superscript](https://github.com/markdown-it/markdown-it-sup)
+- [footnote](https://github.com/markdown-it/markdown-it-footnote)
+- [definition list](https://github.com/markdown-it/markdown-it-deflist)
+- [abbreviation](https://github.com/markdown-it/markdown-it-abbr)
+- [emoji](https://github.com/markdown-it/markdown-it-emoji)
+- [custom container](https://github.com/markdown-it/markdown-it-container)
+- [insert](https://github.com/markdown-it/markdown-it-ins)
+- [mark](https://github.com/markdown-it/markdown-it-mark)
+- ... and [others](https://www.npmjs.org/browse/keyword/markdown-it-plugin)
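+
+Plugins from this list are loaded with `.use()`; a minimal sketch, assuming
+`markdown-it-footnote` is installed:
+
+```js
+import markdownit from 'markdown-it'
+import footnote from 'markdown-it-footnote'
+
+const md = markdownit().use(footnote)
+
+md.render('Here is a footnote reference[^1].\n\n[^1]: And the footnote text.')
+```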
+
+
+### Manage rules
+
+By default all rules are enabled, but they can be restricted by options. When a plugin
+is loaded, all of its rules are enabled automatically.
+
+```js
+import markdownit from 'markdown-it'
+
+// Activate/deactivate rules, with chaining
+const md = markdownit()
+ .disable(['link', 'image'])
+ .enable(['link'])
+ .enable('image');
+
+// Enable everything
+const md = markdownit({
+ html: true,
+ linkify: true,
+ typographer: true,
+});
+```
+
+You can find all rules in sources:
+
+- [`parser_core.mjs`](lib/parser_core.mjs)
+- [`parser_block.mjs`](lib/parser_block.mjs)
+- [`parser_inline.mjs`](lib/parser_inline.mjs)
+
+
+## Benchmark
+
+Here is the result of parsing this README on a MacBook Pro Retina 2013 (2.4 GHz):
+
+```bash
+npm run benchmark-deps
+benchmark/benchmark.mjs readme
+
+Selected samples: (1 of 28)
+ > README
+
+Sample: README.md (7774 bytes)
+ > commonmark-reference x 1,222 ops/sec ±0.96% (97 runs sampled)
+ > current x 743 ops/sec ±0.84% (97 runs sampled)
+ > current-commonmark x 1,568 ops/sec ±0.84% (98 runs sampled)
+ > marked x 1,587 ops/sec ±4.31% (93 runs sampled)
+```
+
+__Note.__ The CommonMark version runs with [simplified link normalizers](https://github.com/markdown-it/markdown-it/blob/master/benchmark/implementations/current-commonmark/index.mjs)
+for a more "honest" comparison. The difference is ≈1.5×.
+
+As you can see, `markdown-it` doesn't trade speed for its flexibility. The
+slowdown of the "full" version is caused by additional features not available in
+other implementations.
+
+
+## markdown-it for enterprise
+
+Available as part of the Tidelift Subscription.
+
+The maintainers of `markdown-it` and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open source dependencies you use to build your applications. Save time, reduce risk, and improve code health, while paying the maintainers of the exact dependencies you use. [Learn more.](https://tidelift.com/subscription/pkg/npm-markdown-it?utm_source=npm-markdown-it&utm_medium=referral&utm_campaign=enterprise&utm_term=repo)
+
+
+## Authors
+
+- Alex Kocharin [github/rlidwka](https://github.com/rlidwka)
+- Vitaly Puzrin [github/puzrin](https://github.com/puzrin)
+
+_markdown-it_ is the result of the decision of the authors who contributed to
+99% of the _Remarkable_ code to move to a project with the same authorship but
+new leadership (Vitaly and Alex). It's not a fork.
+
+## References / Thanks
+
+Big thanks to [John MacFarlane](https://github.com/jgm) for his work on the
+CommonMark spec and reference implementations. His work saved us a lot of time
+during this project's development.
+
+**Related Links:**
+
+- https://github.com/jgm/CommonMark - reference CommonMark implementations in C & JS,
+ also contains latest spec & online demo.
+- http://talk.commonmark.org - CommonMark forum, a good place to coordinate
+  developers' efforts.
+
+**Ports**
+
+- [motion-markdown-it](https://github.com/digitalmoksha/motion-markdown-it) - Ruby/RubyMotion
+- [markdown-it-py](https://github.com/ExecutableBookProject/markdown-it-py) - Python
diff --git a/node_modules/markdown-it/bin/markdown-it.mjs b/node_modules/markdown-it/bin/markdown-it.mjs
new file mode 100755
index 0000000..84626f1
--- /dev/null
+++ b/node_modules/markdown-it/bin/markdown-it.mjs
@@ -0,0 +1,107 @@
+#!/usr/bin/env node
+/* eslint no-console:0 */
+
+import fs from 'node:fs'
+import argparse from 'argparse'
+import markdownit from '../index.mjs'
+
+const cli = new argparse.ArgumentParser({
+ prog: 'markdown-it',
+ add_help: true
+})
+
+cli.add_argument('-v', '--version', {
+ action: 'version',
+ version: JSON.parse(fs.readFileSync(new URL('../package.json', import.meta.url))).version
+})
+
+cli.add_argument('--no-html', {
+ help: 'Disable embedded HTML',
+ action: 'store_true'
+})
+
+cli.add_argument('-l', '--linkify', {
+ help: 'Autolink text',
+ action: 'store_true'
+})
+
+cli.add_argument('-t', '--typographer', {
+ help: 'Enable smartquotes and other typographic replacements',
+ action: 'store_true'
+})
+
+cli.add_argument('--trace', {
+ help: 'Show stack trace on error',
+ action: 'store_true'
+})
+
+cli.add_argument('file', {
+ help: 'File to read',
+ nargs: '?',
+ default: '-'
+})
+
+cli.add_argument('-o', '--output', {
+ help: 'File to write',
+ default: '-'
+})
+
+const options = cli.parse_args()
+
+function readFile (filename, encoding, callback) {
+ if (options.file === '-') {
+ // read from stdin
+ const chunks = []
+
+ process.stdin.on('data', function (chunk) { chunks.push(chunk) })
+
+ process.stdin.on('end', function () {
+ return callback(null, Buffer.concat(chunks).toString(encoding))
+ })
+ } else {
+ fs.readFile(filename, encoding, callback)
+ }
+}
+
+readFile(options.file, 'utf8', function (err, input) {
+ let output
+
+ if (err) {
+ if (err.code === 'ENOENT') {
+ console.error('File not found: ' + options.file)
+ process.exit(2)
+ }
+
+ console.error(
+ (options.trace && err.stack) ||
+ err.message ||
+ String(err))
+
+ process.exit(1)
+ }
+
+ const md = markdownit({
+ html: !options.no_html,
+ xhtmlOut: false,
+ typographer: options.typographer,
+ linkify: options.linkify
+ })
+
+ try {
+ output = md.render(input)
+ } catch (e) {
+ console.error(
+ (options.trace && e.stack) ||
+ e.message ||
+ String(e))
+
+ process.exit(1)
+ }
+
+ if (options.output === '-') {
+ // write to stdout
+ process.stdout.write(output)
+ } else {
+ fs.writeFileSync(options.output, output)
+ }
+})
diff --git a/node_modules/markdown-it/index.mjs b/node_modules/markdown-it/index.mjs
new file mode 100644
index 0000000..f7ba45f
--- /dev/null
+++ b/node_modules/markdown-it/index.mjs
@@ -0,0 +1 @@
+export { default } from './lib/index.mjs'
diff --git a/node_modules/markdown-it/package.json b/node_modules/markdown-it/package.json
new file mode 100644
index 0000000..4d55a3e
--- /dev/null
+++ b/node_modules/markdown-it/package.json
@@ -0,0 +1,92 @@
+{
+ "name": "markdown-it",
+ "version": "14.1.0",
+ "description": "Markdown-it - modern pluggable markdown parser.",
+ "keywords": [
+ "markdown",
+ "parser",
+ "commonmark",
+ "markdown-it",
+ "markdown-it-plugin"
+ ],
+ "repository": "markdown-it/markdown-it",
+ "license": "MIT",
+ "main": "dist/index.cjs.js",
+ "module": "index.mjs",
+ "exports": {
+ ".": {
+ "import": "./index.mjs",
+ "require": "./dist/index.cjs.js"
+ },
+ "./*": {
+ "require": "./*",
+ "import": "./*"
+ }
+ },
+ "bin": {
+ "markdown-it": "bin/markdown-it.mjs"
+ },
+ "scripts": {
+ "lint": "eslint .",
+ "test": "npm run lint && CJS_ONLY=1 npm run build && c8 --exclude dist --exclude test -r text -r html -r lcov mocha && node support/specsplit.mjs",
+ "doc": "node support/build_doc.mjs",
+ "gh-doc": "npm run doc && gh-pages -d apidoc -f",
+ "demo": "npm run lint && node support/build_demo.mjs",
+ "gh-demo": "npm run demo && gh-pages -d demo -f -b master -r git@github.com:markdown-it/markdown-it.github.io.git",
+ "build": "rollup -c support/rollup.config.mjs",
+ "benchmark-deps": "npm install --prefix benchmark/extra/ -g marked@0.3.6 commonmark@0.26.0 markdown-it/markdown-it.git#2.2.1",
+ "specsplit": "support/specsplit.mjs good -o test/fixtures/commonmark/good.txt && support/specsplit.mjs bad -o test/fixtures/commonmark/bad.txt && support/specsplit.mjs",
+ "todo": "grep 'TODO' -n -r ./lib 2>/dev/null",
+ "prepublishOnly": "npm test && npm run build && npm run gh-demo && npm run gh-doc"
+ },
+ "files": [
+ "index.mjs",
+ "lib/",
+ "dist/"
+ ],
+ "dependencies": {
+ "argparse": "^2.0.1",
+ "entities": "^4.4.0",
+ "linkify-it": "^5.0.0",
+ "mdurl": "^2.0.0",
+ "punycode.js": "^2.3.1",
+ "uc.micro": "^2.1.0"
+ },
+ "devDependencies": {
+ "@rollup/plugin-babel": "^6.0.4",
+ "@rollup/plugin-commonjs": "^25.0.7",
+ "@rollup/plugin-node-resolve": "^15.2.3",
+ "@rollup/plugin-terser": "^0.4.4",
+ "ansi": "^0.3.0",
+ "benchmark": "~2.1.0",
+ "c8": "^8.0.1",
+ "chai": "^4.2.0",
+ "eslint": "^8.4.1",
+ "eslint-config-standard": "^17.1.0",
+ "express": "^4.14.0",
+ "gh-pages": "^6.1.0",
+ "highlight.js": "^11.9.0",
+ "jest-worker": "^29.7.0",
+ "markdown-it-abbr": "^2.0.0",
+ "markdown-it-container": "^4.0.0",
+ "markdown-it-deflist": "^3.0.0",
+ "markdown-it-emoji": "^3.0.0",
+ "markdown-it-footnote": "^4.0.0",
+ "markdown-it-for-inline": "^2.0.1",
+ "markdown-it-ins": "^4.0.0",
+ "markdown-it-mark": "^4.0.0",
+ "markdown-it-sub": "^2.0.0",
+ "markdown-it-sup": "^2.0.0",
+ "markdown-it-testgen": "^0.1.3",
+ "mocha": "^10.2.0",
+ "ndoc": "^6.0.0",
+ "needle": "^3.0.0",
+ "rollup": "^4.5.0",
+ "shelljs": "^0.8.4",
+ "supertest": "^6.0.1"
+ },
+ "mocha": {
+ "inline-diffs": true,
+ "timeout": 60000
+ }
+}
diff --git a/node_modules/mdurl/LICENSE b/node_modules/mdurl/LICENSE
new file mode 100644
index 0000000..3b2c7bf
--- /dev/null
+++ b/node_modules/mdurl/LICENSE
@@ -0,0 +1,45 @@
+Copyright (c) 2015 Vitaly Puzrin, Alex Kocharin.
+
+Permission is hereby granted, free of charge, to any person
+obtaining a copy of this software and associated documentation
+files (the "Software"), to deal in the Software without
+restriction, including without limitation the rights to use,
+copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the
+Software is furnished to do so, subject to the following
+conditions:
+
+The above copyright notice and this permission notice shall be
+included in all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+OTHER DEALINGS IN THE SOFTWARE.
+
+--------------------------------------------------------------------------------
+
+.parse() is based on Joyent's node.js `url` code:
+
+Copyright Joyent, Inc. and other Node contributors. All rights reserved.
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to
+deal in the Software without restriction, including without limitation the
+rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+sell copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+IN THE SOFTWARE.
diff --git a/node_modules/mdurl/README.md b/node_modules/mdurl/README.md
new file mode 100644
index 0000000..c7f9e95
--- /dev/null
+++ b/node_modules/mdurl/README.md
@@ -0,0 +1,102 @@
+# mdurl
+
+[](https://github.com/markdown-it/mdurl/actions/workflows/ci.yml)
+[](https://www.npmjs.org/package/mdurl)
+
+> URL utilities for [markdown-it](https://github.com/markdown-it/markdown-it) parser.
+
+
+## API
+
+### .encode(str [, exclude, keepEncoded]) -> String
+
+Percent-encode a string, avoiding double encoding. Don't touch `/a-zA-Z0-9/` +
+excluded chars + `/%[a-fA-F0-9]{2}/` (if not disabled). Broken surrogates are
+replaced with `U+FFFD`.
+
+Params:
+
+- __str__ - input string.
+- __exclude__ - optional, `;/?:@&=+$,-_.!~*'()#`. Additional chars to keep intact
+ (except `/a-zA-Z0-9/`).
+- __keepEncoded__ - optional, `true`. By default it skips already encoded sequences
+ (`/%[a-fA-F0-9]{2}/`). If set to `false`, `%` will be encoded.
+
+
+### encode.defaultChars, encode.componentChars
+
+You can use these constants as second argument to `encode` function.
+
+ - `encode.defaultChars` is the same exclude set as in the standard `encodeURI()` function
+ - `encode.componentChars` is the same exclude set as in the `encodeURIComponent()` function
+
+For example, `encode('something', encode.componentChars, true)` is roughly the equivalent of
+the `encodeURIComponent()` function (except `encode()` doesn't throw).
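+
+A quick sketch of how these exclude sets affect the output; the values shown
+are indicative, inferred from the descriptions above rather than verified output:
+
+```js
+import { encode } from 'mdurl'
+
+// Default exclude set, roughly mirroring encodeURI(): reserved chars stay intact.
+encode('https://example.org/a b')             // 'https://example.org/a%20b'
+
+// Component exclude set, roughly mirroring encodeURIComponent(): '/' is encoded too.
+encode('a b/c', encode.componentChars, true)  // 'a%20b%2Fc'
+
+// Already-encoded sequences are kept by default...
+encode('%20')                                 // '%20'
+// ...unless keepEncoded is false, in which case '%' itself is encoded.
+encode('%20', encode.defaultChars, false)     // '%2520'
+```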
+
+
+### .decode(str [, exclude]) -> String
+
+Decode percent-encoded string. Invalid percent-encoded sequences (e.g. `%2G`)
+are left as is. Invalid UTF-8 characters are replaced with `U+FFFD`.
+
+
+Params:
+
+- __str__ - input string.
+- __exclude__ - set of characters to leave encoded, optional, `;/?:@&=+$,#`.
+
+
+### decode.defaultChars, decode.componentChars
+
+You can use these constants as second argument to `decode` function.
+
+ - `decode.defaultChars` is the same exclude set as in the standard `decodeURI()` function
+ - `decode.componentChars` is the same exclude set as in the `decodeURIComponent()` function
+
+For example, `decode('something', decode.defaultChars)` has the same behavior as
+`decodeURI('something')` on a correctly encoded input.
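+
+A similarly hedged sketch for decoding (results are indicative):
+
+```js
+import { decode } from 'mdurl'
+
+// Characters in the default exclude set (';/?:@&=+$,#') are left encoded.
+decode('a%20b')       // 'a b'
+decode('a%2Fb')       // 'a%2Fb'  ('/' is in the default exclude set)
+decode('a%2Fb', '')   // 'a/b'    (an empty exclude set decodes everything)
+decode('%2G')         // '%2G'    (invalid sequences are left as-is)
+```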
+
+
+### .parse(url, slashesDenoteHost) -> urlObj
+
+Parse url string. Similar to node's [url.parse](http://nodejs.org/api/url.html#url_url_parse_urlstr_parsequerystring_slashesdenotehost), but without any
+normalizations and query string parse.
+
+ - __url__ - input url (string)
+ - __slashesDenoteHost__ - if url starts with `//`, expect a hostname after it. Optional, `false`.
+
+Result (hash):
+
+- protocol
+- slashes
+- auth
+- port
+- hostname
+- hash
+- search
+- pathname
+
+Difference with node's `url`:
+
+1. No leading slash in paths, e.g. in `url.parse('http://foo?bar')` pathname is
+ ``, not `/`
+2. Backslashes are not replaced with slashes, so `http:\\example.org\` is
+ treated like a relative path
+3. Trailing colon is treated like a part of the path, i.e. in
+ `http://example.org:foo` pathname is `:foo`
+4. Nothing is URL-encoded in the resulting object (in joyent/node, some chars
+   in auth and paths are encoded)
+5. `url.parse()` does not have `parseQueryString` argument
+6. Removed extraneous result properties: `host`, `path`, `query`, etc.,
+ which can be constructed using other parts of the url.
+
+
+### .format(urlObject)
+
+Format an object previously obtained with the `.parse()` function. Similar to node's
+[url.format](http://nodejs.org/api/url.html#url_url_format_urlobj).
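+
+For illustration, a small parse/format round trip; the exact field values are
+indicative, inferred from the result shape documented above:
+
+```js
+import { parse, format } from 'mdurl'
+
+const u = parse('https://user@example.org:8080/a/b?q=1#frag')
+// Expected shape:
+// { protocol: 'https:', slashes: true, auth: 'user', port: '8080',
+//   hostname: 'example.org', hash: '#frag', search: '?q=1', pathname: '/a/b' }
+
+// format() should reassemble the original string from those parts.
+format(u)  // 'https://user@example.org:8080/a/b?q=1#frag'
+```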
+
+
+## License
+
+[MIT](https://github.com/markdown-it/mdurl/blob/master/LICENSE)
diff --git a/node_modules/mdurl/index.mjs b/node_modules/mdurl/index.mjs
new file mode 100644
index 0000000..fd78c37
--- /dev/null
+++ b/node_modules/mdurl/index.mjs
@@ -0,0 +1,11 @@
+import decode from './lib/decode.mjs'
+import encode from './lib/encode.mjs'
+import format from './lib/format.mjs'
+import parse from './lib/parse.mjs'
+
+export {
+ decode,
+ encode,
+ format,
+ parse
+}
diff --git a/node_modules/mdurl/package.json b/node_modules/mdurl/package.json
new file mode 100644
index 0000000..6e89beb
--- /dev/null
+++ b/node_modules/mdurl/package.json
@@ -0,0 +1,37 @@
+{
+ "name": "mdurl",
+ "version": "2.0.0",
+ "description": "URL utilities for markdown-it",
+ "repository": "markdown-it/mdurl",
+ "license": "MIT",
+ "main": "build/index.cjs.js",
+ "module": "index.mjs",
+ "exports": {
+ ".": {
+ "require": "./build/index.cjs.js",
+ "import": "./index.mjs"
+ },
+ "./*": {
+ "require": "./*",
+ "import": "./*"
+ }
+ },
+ "scripts": {
+ "lint": "eslint .",
+ "build": "rollup -c",
+ "test": "npm run lint && npm run build && c8 --exclude build --exclude test -r text -r html -r lcov mocha",
+ "prepublishOnly": "npm run lint && npm run build"
+ },
+ "files": [
+ "index.mjs",
+ "lib/",
+ "build/"
+ ],
+ "devDependencies": {
+ "c8": "^8.0.1",
+ "eslint": "^8.54.0",
+ "eslint-config-standard": "^17.1.0",
+ "mocha": "^10.2.0",
+ "rollup": "^4.6.1"
+ }
+}
diff --git a/node_modules/minimatch/LICENSE b/node_modules/minimatch/LICENSE
new file mode 100644
index 0000000..1493534
--- /dev/null
+++ b/node_modules/minimatch/LICENSE
@@ -0,0 +1,15 @@
+The ISC License
+
+Copyright (c) 2011-2023 Isaac Z. Schlueter and Contributors
+
+Permission to use, copy, modify, and/or distribute this software for any
+purpose with or without fee is hereby granted, provided that the above
+copyright notice and this permission notice appear in all copies.
+
+THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR
+IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
diff --git a/node_modules/minimatch/README.md b/node_modules/minimatch/README.md
new file mode 100644
index 0000000..3c97a02
--- /dev/null
+++ b/node_modules/minimatch/README.md
@@ -0,0 +1,454 @@
+# minimatch
+
+A minimal matching utility.
+
+This is the matching library used internally by npm.
+
+It works by converting glob expressions into JavaScript `RegExp`
+objects.
+
+## Usage
+
+```js
+// hybrid module, load with require() or import
+import { minimatch } from 'minimatch'
+// or:
+const { minimatch } = require('minimatch')
+
+minimatch('bar.foo', '*.foo') // true!
+minimatch('bar.foo', '*.bar') // false!
+minimatch('bar.foo', '*.+(bar|foo)', { debug: true }) // true, and noisy!
+```
+
+## Features
+
+Supports these glob features:
+
+- Brace Expansion
+- Extended glob matching
+- "Globstar" `**` matching
+- [Posix character
+ classes](https://www.gnu.org/software/bash/manual/html_node/Pattern-Matching.html),
+ like `[[:alpha:]]`, supporting the full range of Unicode
+ characters. For example, `[[:alpha:]]` will match against
+ `'é'`, though `[a-zA-Z]` will not. Collating symbol and set
+ matching is not supported, so `[[=e=]]` will _not_ match `'é'`
+ and `[[.ch.]]` will not match `'ch'` in locales where `ch` is
+ considered a single character.
+
+See:
+
+- `man sh`
+- `man bash` [Pattern
+ Matching](https://www.gnu.org/software/bash/manual/html_node/Pattern-Matching.html)
+- `man 3 fnmatch`
+- `man 5 gitignore`
+
+## Windows
+
+**Please only use forward-slashes in glob expressions.**
+
+Though windows uses either `/` or `\` as its path separator, only `/`
+characters are used by this glob implementation. You must use
+forward-slashes **only** in glob expressions. Back-slashes in patterns
+will always be interpreted as escape characters, not path separators.
+
+Note that `\` or `/` _will_ be interpreted as path separators in paths on
+Windows, and will match against `/` in glob expressions.
+
+So just always use `/` in patterns.
+
+### UNC Paths
+
+On Windows, UNC paths like `//?/c:/...` or
+`//ComputerName/Share/...` are handled specially.
+
+- Patterns starting with a double-slash followed by some
+ non-slash characters will preserve their double-slash. As a
+ result, a pattern like `//*` will match `//x`, but not `/x`.
+- Patterns starting with `//?/<drive letter>:` will _not_ treat
+  the `?` as a wildcard character. Instead, it will be treated
+  as a normal string.
+- Patterns starting with `//?/<drive letter>:/...` will match
+  file paths starting with `<drive letter>:/...`, and vice versa,
+ as if the `//?/` was not present. This behavior only is
+ present when the drive letters are a case-insensitive match to
+ one another. The remaining portions of the path/pattern are
+ compared case sensitively, unless `nocase:true` is set.
+
+Note that specifying a UNC path using `\` characters as path
+separators is always allowed in the file path argument, but only
+allowed in the pattern argument when `windowsPathsNoEscape: true`
+is set in the options.
+
+## Minimatch Class
+
+Create a minimatch object by instantiating the `minimatch.Minimatch` class.
+
+```javascript
+var Minimatch = require('minimatch').Minimatch
+var mm = new Minimatch(pattern, options)
+```
+
+### Properties
+
+- `pattern` The original pattern the minimatch object represents.
+- `options` The options supplied to the constructor.
+- `set` A 2-dimensional array of regexp or string expressions.
+ Each row in the
+ array corresponds to a brace-expanded pattern. Each item in the row
+ corresponds to a single path-part. For example, the pattern
+ `{a,b/c}/d` would expand to a set of patterns like:
+
+ [ [ a, d ]
+ , [ b, c, d ] ]
+
+ If a portion of the pattern doesn't have any "magic" in it
+ (that is, it's something like `"foo"` rather than `fo*o?`), then it
+ will be left as a string rather than converted to a regular
+ expression.
+
+- `regexp` Created by the `makeRe` method. A single regular expression
+ expressing the entire pattern. This is useful in cases where you wish
+ to use the pattern somewhat like `fnmatch(3)` with `FNM_PATH` enabled.
+- `negate` True if the pattern is negated.
+- `comment` True if the pattern is a comment.
+- `empty` True if the pattern is `""`.
+
+### Methods
+
+- `makeRe()` Generate the `regexp` member if necessary, and return it.
+ Will return `false` if the pattern is invalid.
+- `match(fname)` Return true if the filename matches the pattern, or
+ false otherwise.
+- `matchOne(fileArray, patternArray, partial)` Take a `/`-split
+ filename, and match it against a single row in the `regExpSet`. This
+ method is mainly for internal use, but is exposed so that it can be
+ used by a glob-walker that needs to avoid excessive filesystem calls.
+- `hasMagic()` Returns true if the parsed pattern contains any
+ magic characters. Returns false if all comparator parts are
+ string literals. If the `magicalBraces` option is set on the
+ constructor, then it will consider brace expansions which are
+ not otherwise magical to be magic. If not set, then a pattern
+ like `a{b,c}d` will return `false`, because neither `abd` nor
+ `acd` contain any special glob characters.
+
+ This does **not** mean that the pattern string can be used as a
+ literal filename, as it may contain magic glob characters that
+ are escaped. For example, the pattern `\\*` or `[*]` would not
+ be considered to have magic, as the matching portion parses to
+ the literal string `'*'` and would match a path named `'*'`,
+ not `'\\*'` or `'[*]'`. The `minimatch.unescape()` method may
+ be used to remove escape characters.
+
+All other methods are internal, and will be called as necessary.
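+
+A short sketch of the class in use, based on the properties and methods
+described above (results are indicative):
+
+```js
+import { Minimatch } from 'minimatch'
+
+// '{a,b/c}/d' brace-expands into two rows in `set`: [ 'a', 'd' ] and [ 'b', 'c', 'd' ].
+const mm = new Minimatch('{a,b/c}/d')
+
+mm.match('a/d')    // true
+mm.match('b/c/d')  // true
+mm.match('b/d')    // false
+```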
+
+### minimatch(path, pattern, options)
+
+Main export. Tests a path against the pattern using the options.
+
+```javascript
+var isJS = minimatch(file, '*.js', { matchBase: true })
+```
+
+### minimatch.filter(pattern, options)
+
+Returns a function that tests its
+supplied argument, suitable for use with `Array.filter`. Example:
+
+```javascript
+var javascripts = fileList.filter(minimatch.filter('*.js', { matchBase: true }))
+```
+
+### minimatch.escape(pattern, options = {})
+
+Escape all magic characters in a glob pattern, so that it will
+only ever match literal strings.
+
+If the `windowsPathsNoEscape` option is used, then characters are
+escaped by wrapping in `[]`, because a magic character wrapped in
+a character class can only be satisfied by that exact character.
+
+Slashes (and backslashes in `windowsPathsNoEscape` mode) cannot
+be escaped or unescaped.
+
+### minimatch.unescape(pattern, options = {})
+
+Un-escape a glob string that may contain some escaped characters.
+
+If the `windowsPathsNoEscape` option is used, then square-brace
+escapes are removed, but not backslash escapes. For example, it
+will turn the string `'[*]'` into `*`, but it will not turn
+`'\\*'` into `'*'`, because `\` is a path separator in
+`windowsPathsNoEscape` mode.
+
+When `windowsPathsNoEscape` is not set, then both brace escapes
+and backslash escapes are removed.
+
+Slashes (and backslashes in `windowsPathsNoEscape` mode) cannot
+be escaped or unescaped.
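+
+A hedged example of escaping and un-escaping; the exact escaped form shown is
+indicative:
+
+```js
+import { minimatch } from 'minimatch'
+
+// Escape magic characters so the pattern only matches the literal string.
+const literal = minimatch.escape('*.js')  // e.g. '\\*.js'
+minimatch('*.js', literal)                // true  (matches the literal name '*.js')
+minimatch('foo.js', literal)              // false (no longer a wildcard)
+
+// By default, unescape removes both backslash and square-brace escapes.
+minimatch.unescape(literal)               // '*.js'
+```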
+
+### minimatch.match(list, pattern, options)
+
+Match against the list of
+files, in the style of fnmatch or glob. If nothing is matched, and
+options.nonull is set, then return a list containing the pattern itself.
+
+```javascript
+var javascripts = minimatch.match(fileList, '*.js', { matchBase: true })
+```
+
+### minimatch.makeRe(pattern, options)
+
+Make a regular expression object from the pattern.
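+
+For example (a minimal sketch):
+
+```js
+import { minimatch } from 'minimatch'
+
+const re = minimatch.makeRe('*.js')  // a RegExp, or false if the pattern is invalid
+re.test('foo.js')   // true
+re.test('foo.css')  // false
+```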
+
+## Options
+
+All options are `false` by default.
+
+### debug
+
+Dump a ton of stuff to stderr.
+
+### nobrace
+
+Do not expand `{a,b}` and `{1..3}` brace sets.
+
+### noglobstar
+
+Disable `**` matching against multiple folder names.
+
+### dot
+
+Allow patterns to match filenames starting with a period, even if
+the pattern does not explicitly have a period in that spot.
+
+Note that by default, `a/**/b` will **not** match `a/.d/b`, unless `dot`
+is set.
+
+### noext
+
+Disable "extglob" style patterns like `+(a|b)`.
+
+### nocase
+
+Perform a case-insensitive match.
+
+### nocaseMagicOnly
+
+When used with `{nocase: true}`, create regular expressions that
+are case-insensitive, but leave string match portions untouched.
+Has no effect when used without `{nocase: true}`.
+
+Useful when some other form of case-insensitive matching is used,
+or if the original string representation is useful in some other
+way.
+
+### nonull
+
+When a match is not found by `minimatch.match`, return a list containing
+the pattern itself if this option is set. When not set, an empty list
+is returned if there are no matches.
+
+### magicalBraces
+
+This only affects the results of the `Minimatch.hasMagic` method.
+
+If the pattern contains brace expansions, such as `a{b,c}d`, but
+no other magic characters, then the `Minimatch.hasMagic()` method
+will return `false` by default. When this option is set, it will
+return `true` for brace expansion as well as other magic glob
+characters.
+
+### matchBase
+
+If set, then patterns without slashes will be matched
+against the basename of the path if the path contains slashes. For example,
+`a?b` would match the path `/xyz/123/acb`, but not `/xyz/acb/123`.
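+
+For example (taken directly from the description above):
+
+```js
+import { minimatch } from 'minimatch'
+
+minimatch('/xyz/123/acb', 'a?b', { matchBase: true })  // true  (basename 'acb' matches 'a?b')
+minimatch('/xyz/acb/123', 'a?b', { matchBase: true })  // false (basename '123' does not)
+```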
+
+### nocomment
+
+Suppress the behavior of treating `#` at the start of a pattern as a
+comment.
+
+### nonegate
+
+Suppress the behavior of treating a leading `!` character as negation.
+
+### flipNegate
+
+Return results from negated expressions as if they were not negated
+(i.e., true on a hit, false on a miss).
+
+### partial
+
+Compare a partial path to a pattern. As long as the parts of the path that
+are present are not contradicted by the pattern, it will be treated as a
+match. This is useful in applications where you're walking through a
+folder structure, and don't yet have the full path, but want to ensure that
+you do not walk down paths that can never be a match.
+
+For example,
+
+```js
+minimatch('/a/b', '/a/*/c/d', { partial: true }) // true, might be /a/b/c/d
+minimatch('/a/b', '/**/d', { partial: true }) // true, might be /a/b/.../d
+minimatch('/x/y/z', '/a/**/z', { partial: true }) // false, because x !== a
+```
+
+### windowsPathsNoEscape
+
+Use `\\` as a path separator _only_, and _never_ as an escape
+character. If set, all `\\` characters are replaced with `/` in
+the pattern. Note that this makes it **impossible** to match
+against paths containing literal glob pattern characters, but
+allows matching with patterns constructed using `path.join()` and
+`path.resolve()` on Windows platforms, mimicking the (buggy!)
+behavior of earlier versions on Windows. Please use with
+caution, and be mindful of [the caveat about Windows
+paths](#windows).
+
+For legacy reasons, this is also set if
+`options.allowWindowsEscape` is set to the exact value `false`.
+
+### windowsNoMagicRoot
+
+When a pattern starts with a UNC path or drive letter, and in
+`nocase:true` mode, do not convert the root portions of the
+pattern into a case-insensitive regular expression, and instead
+leave them as strings.
+
+This is the default when the platform is `win32` and
+`nocase:true` is set.
+
+### preserveMultipleSlashes
+
+By default, multiple `/` characters (other than the leading `//`
+in a UNC path, see "UNC Paths" above) are treated as a single
+`/`.
+
+That is, a pattern like `a///b` will match the file path `a/b`.
+
+Set `preserveMultipleSlashes: true` to suppress this behavior.
+
+### optimizationLevel
+
+A number indicating the level of optimization that should be done
+to the pattern prior to parsing and using it for matches.
+
+Globstar parts `**` are always converted to `*` when `noglobstar`
+is set, and multiple adjacent `**` parts are converted into a
+single `**` (ie, `a/**/**/b` will be treated as `a/**/b`, as this
+is equivalent in all cases).
+
+- `0` - Make no further changes. In this mode, `.` and `..` are
+ maintained in the pattern, meaning that they must also appear
+ in the same position in the test path string. Eg, a pattern
+ like `a/*/../c` will match the string `a/b/../c` but not the
+ string `a/c`.
+- `1` - (default) Remove cases where a double-dot `..` follows a
+ pattern portion that is not `**`, `.`, `..`, or empty `''`. For
+ example, the pattern `./a/b/../*` is converted to `./a/*`, and
+ so it will match the path string `./a/c`, but not the path
+ string `./a/b/../c`. Dots and empty path portions in the
+ pattern are preserved.
+- `2` (or higher) - Much more aggressive optimizations, suitable
+ for use with file-walking cases:
+
+ - Remove cases where a double-dot `..` follows a pattern
+ portion that is not `**`, `.`, or empty `''`. Remove empty
+ and `.` portions of the pattern, where safe to do so (ie,
+ anywhere other than the last position, the first position, or
+ the second position in a pattern starting with `/`, as this
+ may indicate a UNC path on Windows).
+ - Convert patterns containing `