
Comparing changes

base repository: redis/node-redis
base: [email protected]
head repository: redis/node-redis
compare: master

Commits on Jan 13, 2022

  1. update bloom version
     leibale committed Jan 13, 2022 · 5a47129
  2. Release bloom@1.0.0
     leibale committed Jan 13, 2022 · 2c4565f
  3. Release json@1.0.2
     leibale committed Jan 13, 2022 · 5a3120e
  4. Release search@1.0.2
     leibale committed Jan 13, 2022 · f91535f
  5. Release time-series@1.0.1
     leibale committed Jan 13, 2022 · 525142e
  6. upgrade modules
     leibale committed Jan 13, 2022 · d5b8ead
  7. Release redis@4.0.2
     leibale committed Jan 13, 2022 · 95c7dc1
  8. fix bloom files
     leibale committed Jan 13, 2022 · 2922245
  9. Release bloom@1.0.1
     leibale committed Jan 13, 2022 · 7bcc1a7
  10. update readmes
      leibale committed Jan 13, 2022 · 841bd43
  11. Add stream examples. (#1830)

      * Adds stream consumer and producer scripts to doc.
      * Updated build command example.
      * Adds stream producer example.
      * Adds basic stream consumer example.
      * Added isolated execution.
      * Update README.md
      * Update stream-consumer.js

      Co-authored-by: Leibale Eidelman <leibale1998@gmail.com>
      Simon Prickett and leibale authored Jan 13, 2022 · 309cdbd
  12. Update stream-consumer.js
      leibale authored Jan 13, 2022 · 8a40398

Commits on Jan 15, 2022

  1. Adds topk example for RedisBloom (#1837)

     * Adds topk example.
     * Update topk.js

     Co-authored-by: Leibale Eidelman <leibale1998@gmail.com>
     Simon Prickett and leibale authored Jan 15, 2022 · e4601b6
  2. Adds Bloom Filter example using RedisBloom. (#1835)

     * Adds Bloom Filter example using RedisBloom.
     * Update bloom-filter.js

     Co-authored-by: Leibale Eidelman <leibale1998@gmail.com>
     Simon Prickett and leibale authored Jan 15, 2022 · 06cb637
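The commit above adds a Bloom filter example built on RedisBloom. As a rough illustration of the data structure that example exercises — a sketch only, not the RedisBloom implementation and not the node-redis API — a toy in-memory Bloom filter might look like this (the class name, bit count, and hash scheme are all arbitrary choices for demonstration):

```javascript
// Toy Bloom filter: a fixed bit array plus k hash functions. Adding an item
// sets k bits; membership tests can report false positives but never false
// negatives, which is the trade-off the RedisBloom example demonstrates.
class ToyBloomFilter {
  constructor(bits = 1024, hashes = 3) {
    this.bits = bits;
    this.hashes = hashes;
    this.bitset = new Uint8Array(bits);
  }

  // FNV-1a style string hash, perturbed by a per-function seed.
  _hash(item, seed) {
    let h = (2166136261 ^ seed) >>> 0;
    for (let i = 0; i < item.length; i++) {
      h ^= item.charCodeAt(i);
      h = Math.imul(h, 16777619);
    }
    return (h >>> 0) % this.bits;
  }

  add(item) {
    for (let s = 0; s < this.hashes; s++) {
      this.bitset[this._hash(item, s)] = 1;
    }
  }

  // false => definitely not present; true => probably present.
  mightContain(item) {
    for (let s = 0; s < this.hashes; s++) {
      if (!this.bitset[this._hash(item, s)]) return false;
    }
    return true;
  }
}
```

With RedisBloom itself, the equivalent operations are the `BF.ADD` and `BF.EXISTS` commands; the toy above only mimics their semantics in memory.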
  3. Update README.md (#1840)

     * Update README.md
     * Update README.md

     Co-authored-by: Leibale Eidelman <leibale1998@gmail.com>
     gkorland and leibale authored Jan 15, 2022 · 2ff7084

Commits on Jan 19, 2022

  1. Adds Bloom overview README. (#1850)

     * Adds Bloom overview README.
     * Update README.md
     * Incorporates feedback.
     * Update README.md

     Co-authored-by: Leibale Eidelman <leibale1998@gmail.com>
     Simon Prickett and leibale authored Jan 19, 2022 · 1f8993a
  2. 86c239e
  3. Adds Cuckoo Filter example. (#1843)

     Co-authored-by: Leibale Eidelman <leibale1998@gmail.com>
     Simon Prickett and leibale authored Jan 19, 2022 · b68836c
  4. Adding a RedisTimeSeries example (#1839)

     * Adds the start of a timeseries example.
     * Exports required TimeSeries items.
     * Fixed import.
     * Added TS.INFO example output.
     * Fixed typo.
     * Fixed typo.
     * Exported aggregation enum.
     * Working time series example.

     Co-authored-by: Leibale Eidelman <leibale1998@gmail.com>
     Simon Prickett and leibale authored Jan 19, 2022 · d602682
  5. Add streams XREADGROUP and XACK example. (#1832)

     * Removed stream delete command to allow consumer group example to work.
     * Adds stream consumer group example.
     * Adds stream consumer group example code.
     * Update README.md

     Co-authored-by: Leibale Eidelman <leibale1998@gmail.com>
     Simon Prickett and leibale authored Jan 19, 2022 · 41f6b00

Commits on Jan 20, 2022

  1. Adds timeseries overview. (#1853)
     Simon Prickett authored Jan 20, 2022 · bd1e500
  2. Create dependabot.yml
     leibale authored Jan 20, 2022 · 59c0fd0
  3. Bump mocha from 9.1.3 to 9.1.4 (#1857)

     Bumps [mocha](https://github.com/mochajs/mocha) from 9.1.3 to 9.1.4.
     - [Release notes](https://github.com/mochajs/mocha/releases)
     - [Changelog](https://github.com/mochajs/mocha/blob/master/CHANGELOG.md)
     - [Commits](mochajs/mocha@v9.1.3...v9.1.4)

     ---
     updated-dependencies:
     - dependency-name: mocha
       dependency-type: direct:development
       update-type: version-update:semver-patch
     ...

     Signed-off-by: dependabot[bot] <support@github.com>
     Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
     dependabot[bot] authored Jan 20, 2022 · 47200a5
  4. Bump typedoc from 0.22.10 to 0.22.11 (#1860)

     Bumps [typedoc](https://github.com/TypeStrong/TypeDoc) from 0.22.10 to 0.22.11.
     - [Release notes](https://github.com/TypeStrong/TypeDoc/releases)
     - [Changelog](https://github.com/TypeStrong/typedoc/blob/master/CHANGELOG.md)
     - [Commits](TypeStrong/typedoc@v0.22.10...v0.22.11)

     ---
     updated-dependencies:
     - dependency-name: typedoc
       dependency-type: direct:development
       update-type: version-update:semver-patch
     ...

     Signed-off-by: dependabot[bot] <support@github.com>
     Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
     dependabot[bot] authored Jan 20, 2022 · 3ae275e
  5. Bump eslint from 8.6.0 to 8.7.0 (#1859)

     Bumps [eslint](https://github.com/eslint/eslint) from 8.6.0 to 8.7.0.
     - [Release notes](https://github.com/eslint/eslint/releases)
     - [Changelog](https://github.com/eslint/eslint/blob/main/CHANGELOG.md)
     - [Commits](eslint/eslint@v8.6.0...v8.7.0)

     ---
     updated-dependencies:
     - dependency-name: eslint
       dependency-type: direct:development
       update-type: version-update:semver-minor
     ...

     Signed-off-by: dependabot[bot] <support@github.com>
     Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
     dependabot[bot] authored Jan 20, 2022 · 7e6d4b5
  6. Bump typescript from 4.5.4 to 4.5.5 (#1858)

     Bumps [typescript](https://github.com/Microsoft/TypeScript) from 4.5.4 to 4.5.5.
     - [Release notes](https://github.com/Microsoft/TypeScript/releases)
     - [Commits](https://github.com/Microsoft/TypeScript/commits)

     ---
     updated-dependencies:
     - dependency-name: typescript
       dependency-type: direct:development
       update-type: version-update:semver-patch
     ...

     Signed-off-by: dependabot[bot] <support@github.com>
     Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
     dependabot[bot] authored Jan 20, 2022 · 415a10c
  7. Bump @types/node from 17.0.8 to 17.0.10 (#1861)

     Bumps [@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node) from 17.0.8 to 17.0.10.
     - [Release notes](https://github.com/DefinitelyTyped/DefinitelyTyped/releases)
     - [Commits](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node)

     ---
     updated-dependencies:
     - dependency-name: "@types/node"
       dependency-type: direct:development
       update-type: version-update:semver-patch
     ...

     Signed-off-by: dependabot[bot] <support@github.com>
     Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
     dependabot[bot] authored Jan 20, 2022 · eb26249
  8. Bump release-it from 14.12.1 to 14.12.3 (#1862)

     Bumps [release-it](https://github.com/release-it/release-it) from 14.12.1 to 14.12.3.
     - [Release notes](https://github.com/release-it/release-it/releases)
     - [Changelog](https://github.com/release-it/release-it/blob/master/CHANGELOG.md)
     - [Commits](release-it/release-it@14.12.1...14.12.3)

     ---
     updated-dependencies:
     - dependency-name: release-it
       dependency-type: direct:development
       update-type: version-update:semver-patch
     ...

     Signed-off-by: dependabot[bot] <support@github.com>
     Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
     dependabot[bot] authored Jan 20, 2022 · 76c137b
  9. Update dependabot.yml
     leibale authored Jan 20, 2022 · de16a8d
  10. Delete dependabot.yml
      leibale authored Jan 20, 2022 · bac6744
  11. upgrade dependencies (#1863)
      leibale authored Jan 20, 2022 · a229950

Commits on Jan 24, 2022

  1. e66fa6a
  2. Correct relative links (#1867)

     Fix #1866
     leifjones authored Jan 24, 2022 · cf120c3
  3. 551d204
  4. fix #1846 - handle arguments that are not buffers or strings (#1849)

     * fix #1846 - handle arguments that are not buffers or strings
     * use toString() instead of throw TypeError
     * remove .only and uncomment tests

     leibale authored Jan 24, 2022 · 7ded3dd
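Per its bullet points, commit 7ded3dd switches from throwing a TypeError to calling `toString()` on command arguments that are neither strings nor Buffers. A minimal sketch of that coercion (the helper name is hypothetical; the real logic lives inside the client's command encoder) might be:

```javascript
// Hypothetical sketch of the coercion described in #1846/#1849: strings and
// Buffers pass through untouched; anything else (numbers, booleans, objects
// with a custom toString) is stringified instead of raising a TypeError.
function toCommandArgument(arg) {
  if (typeof arg === 'string' || Buffer.isBuffer(arg)) {
    return arg;
  }
  return arg.toString();
}
```

Under this behavior, a call like `client.set('key', 42)` would send the string `'42'` to the server rather than rejecting the command.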
  5. upgrade dependencies
     leibale committed Jan 24, 2022 · 84aebcc

Commits on Jan 31, 2022

  1. update tls type to be boolean instead of "true" (RedisTlsSocketOptions) (#1851)

     * update tls type to be boolean instead of "true"
     * Update socket.ts
     * Update socket.ts
     * Update socket.ts

     Co-authored-by: Matan Yemini <matan@engageli.com>
     Co-authored-by: Leibale Eidelman <leibale1998@gmail.com>
     3 people authored Jan 31, 2022 · 741aff0
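With the `tls` option widened from the literal `true` to `boolean` in `RedisTlsSocketOptions`, a TLS-enabled socket config can be built from a runtime flag. A sketch of the option shape (host and port are placeholders; nothing connects here):

```javascript
// Sketch of a node-redis v4 socket config using the boolean `tls` flag from
// #1851. The host value is a placeholder; the object would be passed to
// createClient(options) in real code.
const useTls = true; // e.g. derived from an environment variable
const options = {
  socket: {
    host: 'redis.example.com', // placeholder
    port: 6379,
    tls: useTls, // now typed as boolean, not the literal `true`
  },
};
```

The practical benefit of the fix is exactly this pattern: before it, assigning a `boolean` variable (rather than the constant `true`) to `tls` failed type-checking.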
  2. 21270ba
  3. 5147521
  4. ac1a61f
  5. 8160fa7
  6. fix #1864 - cluster.quit (#1886)
     leibale authored Jan 31, 2022 · 46b831c
  7. Graph (#1887)

     * init
     * implement graph commands
     * add graph to packages table
     * fix ts.infoDebug
     * fix redisearch tests
     * Update INFO_DEBUG.ts
     * fix INFO.spec.ts
     * test QUERY and SLOWLOG

     Co-authored-by: Avital-Fine <avital.fine@redis.com>
     leibale and Avital-Fine authored Jan 31, 2022 · 3547b20
  8. upgrade dependencies
     leibale committed Jan 31, 2022 · d78d25a
  9. Release graph@1.0.0
     leibale committed Jan 31, 2022 · a5798e2
  10. Release client@1.0.3
      leibale committed Jan 31, 2022 · 287b334
  11. lock versions
      leibale committed Jan 31, 2022 · 429c1e0
  12. Release redis@4.0.3
      leibale committed Jan 31, 2022 · 16afa7d

Commits on Feb 3, 2022

  1. Minor formatting fix. (#1890)
     Simon Prickett authored Feb 3, 2022 · 9f0f7f5

Commits on Feb 7, 2022

  1. 10da371
Showing 1,325 changed files with 84,469 additions and 33,642 deletions.
39 changes: 39 additions & 0 deletions .github/ISSUE_TEMPLATE/BUG-REPORT.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,39 @@
name: Bug Report
description: Tell us about something that isn't working as expected
labels: [Bug]
body:
  - type: textarea
    id: description
    attributes:
      label: Description
      description: Please enter a detailed description of your issue. If possible, please provide example code to reproduce the issue.
    validations:
      required: true
  - type: input
    id: node-js-version
    attributes:
      label: Node.js Version
      description: Please enter your Node.js version `node --version`
  - type: input
    id: redis-server-version
    attributes:
      label: Redis Server Version
      description: Please enter your Redis server version ([`INFO server`](https://redis.io/commands/info/))
  - type: input
    id: node-redis-version
    attributes:
      label: Node Redis Version
      description: Please enter your node redis version `npm ls redis`
  - type: input
    id: platform
    attributes:
      label: Platform
      description: Please enter the platform you are using e.g. Linux, macOS, Windows
  - type: textarea
    id: logs
    attributes:
      label: Logs
      description: Please copy and paste any relevant log output. This will be automatically formatted into code, so no need for backticks.
      render: bash
    validations:
      required: false
11 changes: 11 additions & 0 deletions .github/ISSUE_TEMPLATE/DOCUMENTATION.yml
@@ -0,0 +1,11 @@
name: Documentation
description: Any questions or issues relating to the project documentation.
labels: [Documentation]
body:
  - type: textarea
    id: description
    attributes:
      label: Description
      description: Ask your question or describe your issue here.
    validations:
      required: true
19 changes: 19 additions & 0 deletions .github/ISSUE_TEMPLATE/FEATURE-REQUEST.yml
@@ -0,0 +1,19 @@
name: Feature Request
description: Suggest an idea for this project
labels: [Feature]
body:
  - type: textarea
    id: motivation
    attributes:
      label: Motivation
      description: How would Node Redis users benefit from this feature?
    validations:
      required: true
  - type: textarea
    id: basic-code-example
    attributes:
      label: Basic Code Example
      description: Provide examples of how you imagine the API for this feature might be implemented. This will be automatically formatted into code, so no need for backticks.
      render: JavaScript
    validations:
      required: false
15 changes: 0 additions & 15 deletions .github/ISSUE_TEMPLATE/bug-report.md

This file was deleted.

7 changes: 0 additions & 7 deletions .github/ISSUE_TEMPLATE/feature-request.md

This file was deleted.

@@ -1,5 +1,5 @@
-name-template: 'Version $NEXT_PATCH_VERSION'
-tag-template: 'v$NEXT_PATCH_VERSION'
+name-template: 'json@$NEXT_PATCH_VERSION'
+tag-template: 'json@$NEXT_PATCH_VERSION'
 autolabeler:
   - label: 'chore'
     files:
@@ -28,8 +28,15 @@ categories:
       - 'bugfix'
       - 'bug'
   - title: '🧰 Maintenance'
-    label: 'chore'
+    label:
+      - 'chore'
+      - 'maintenance'
+      - 'documentation'
+      - 'docs'
 
 change-template: '- $TITLE (#$NUMBER)'
+include-paths:
+  - 'packages/json'
 exclude-labels:
   - 'skip-changelog'
 template: |
50 changes: 50 additions & 0 deletions .github/release-drafter/bloom-config.yml
@@ -0,0 +1,50 @@
name-template: 'bloom@$NEXT_PATCH_VERSION'
tag-template: 'bloom@$NEXT_PATCH_VERSION'
autolabeler:
  - label: 'chore'
    files:
      - '*.md'
      - '.github/*'
  - label: 'bug'
    branch:
      - '/bug-.+'
  - label: 'chore'
    branch:
      - '/chore-.+'
  - label: 'feature'
    branch:
      - '/feature-.+'
categories:
  - title: 'Breaking Changes'
    labels:
      - 'breakingchange'
  - title: '🚀 New Features'
    labels:
      - 'feature'
      - 'enhancement'
  - title: '🐛 Bug Fixes'
    labels:
      - 'fix'
      - 'bugfix'
      - 'bug'
  - title: '🧰 Maintenance'
    label:
      - 'chore'
      - 'maintenance'
      - 'documentation'
      - 'docs'

change-template: '- $TITLE (#$NUMBER)'
include-paths:
  - 'packages/bloom'
exclude-labels:
  - 'skip-changelog'
template: |
  ## Changes
  $CHANGES
  ## Contributors
  We'd like to thank all the contributors who worked on this release!
  $CONTRIBUTORS
50 changes: 50 additions & 0 deletions .github/release-drafter/entraid-config.yml
@@ -0,0 +1,50 @@
name-template: 'entraid@$NEXT_PATCH_VERSION'
tag-template: 'entraid@$NEXT_PATCH_VERSION'
autolabeler:
  - label: 'chore'
    files:
      - '*.md'
      - '.github/*'
  - label: 'bug'
    branch:
      - '/bug-.+'
  - label: 'chore'
    branch:
      - '/chore-.+'
  - label: 'feature'
    branch:
      - '/feature-.+'
categories:
  - title: 'Breaking Changes'
    labels:
      - 'breakingchange'
  - title: '🚀 New Features'
    labels:
      - 'feature'
      - 'enhancement'
  - title: '🐛 Bug Fixes'
    labels:
      - 'fix'
      - 'bugfix'
      - 'bug'
  - title: '🧰 Maintenance'
    label:
      - 'chore'
      - 'maintenance'
      - 'documentation'
      - 'docs'

change-template: '- $TITLE (#$NUMBER)'
include-paths:
  - 'packages/entraid'
exclude-labels:
  - 'skip-changelog'
template: |
  ## Changes
  $CHANGES
  ## Contributors
  We'd like to thank all the contributors who worked on this release!
  $CONTRIBUTORS
50 changes: 50 additions & 0 deletions .github/release-drafter/json-config.yml
@@ -0,0 +1,50 @@
name-template: 'json@$NEXT_PATCH_VERSION'
tag-template: 'json@$NEXT_PATCH_VERSION'
autolabeler:
  - label: 'chore'
    files:
      - '*.md'
      - '.github/*'
  - label: 'bug'
    branch:
      - '/bug-.+'
  - label: 'chore'
    branch:
      - '/chore-.+'
  - label: 'feature'
    branch:
      - '/feature-.+'
categories:
  - title: 'Breaking Changes'
    labels:
      - 'breakingchange'
  - title: '🚀 New Features'
    labels:
      - 'feature'
      - 'enhancement'
  - title: '🐛 Bug Fixes'
    labels:
      - 'fix'
      - 'bugfix'
      - 'bug'
  - title: '🧰 Maintenance'
    label:
      - 'chore'
      - 'maintenance'
      - 'documentation'
      - 'docs'

change-template: '- $TITLE (#$NUMBER)'
include-paths:
  - 'packages/json'
exclude-labels:
  - 'skip-changelog'
template: |
  ## Changes
  $CHANGES
  ## Contributors
  We'd like to thank all the contributors who worked on this release!
  $CONTRIBUTORS
50 changes: 50 additions & 0 deletions .github/release-drafter/search-config.yml
@@ -0,0 +1,50 @@
name-template: 'search@$NEXT_PATCH_VERSION'
tag-template: 'search@$NEXT_PATCH_VERSION'
autolabeler:
  - label: 'chore'
    files:
      - '*.md'
      - '.github/*'
  - label: 'bug'
    branch:
      - '/bug-.+'
  - label: 'chore'
    branch:
      - '/chore-.+'
  - label: 'feature'
    branch:
      - '/feature-.+'
categories:
  - title: 'Breaking Changes'
    labels:
      - 'breakingchange'
  - title: '🚀 New Features'
    labels:
      - 'feature'
      - 'enhancement'
  - title: '🐛 Bug Fixes'
    labels:
      - 'fix'
      - 'bugfix'
      - 'bug'
  - title: '🧰 Maintenance'
    label:
      - 'chore'
      - 'maintenance'
      - 'documentation'
      - 'docs'

change-template: '- $TITLE (#$NUMBER)'
include-paths:
  - 'packages/search'
exclude-labels:
  - 'skip-changelog'
template: |
  ## Changes
  $CHANGES
  ## Contributors
  We'd like to thank all the contributors who worked on this release!
  $CONTRIBUTORS
49 changes: 49 additions & 0 deletions .github/release-drafter/time-series-config.yml
@@ -0,0 +1,49 @@
name-template: 'time-series@$NEXT_PATCH_VERSION'
tag-template: 'time-series@$NEXT_PATCH_VERSION'
autolabeler:
  - label: 'chore'
    files:
      - '*.md'
      - '.github/*'
  - label: 'bug'
    branch:
      - '/bug-.+'
  - label: 'chore'
    branch:
      - '/chore-.+'
  - label: 'feature'
    branch:
      - '/feature-.+'
categories:
  - title: 'Breaking Changes'
    labels:
      - 'breakingchange'
  - title: '🚀 New Features'
    labels:
      - 'feature'
      - 'enhancement'
  - title: '🐛 Bug Fixes'
    labels:
      - 'fix'
      - 'bugfix'
      - 'bug'
  - title: '🧰 Maintenance'
    label:
      - 'chore'
      - 'maintenance'
      - 'documentation'
      - 'docs'
change-template: '- $TITLE (#$NUMBER)'
include-paths:
  - 'packages/time-series'
exclude-labels:
  - 'skip-changelog'
template: |
  ## Changes
  $CHANGES
  ## Contributors
  We'd like to thank all the contributors who worked on this release!
  $CONTRIBUTORS
74 changes: 74 additions & 0 deletions .github/workflows/codeql.yml
@@ -0,0 +1,74 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"

on:
  push:
    branches: [ "master" ]
  pull_request:
    # The branches below must be a subset of the branches above
    branches: [ "master" ]
  schedule:
    - cron: '43 20 * * 1'

jobs:
  analyze:
    name: Analyze
    runs-on: ubuntu-latest
    permissions:
      actions: read
      contents: read
      security-events: write

    strategy:
      fail-fast: false
      matrix:
        language: [ 'TypeScript' ]
        # CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python', 'ruby' ]
        # Learn more about CodeQL language support at https://aka.ms/codeql-docs/language-support

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      # Initializes the CodeQL tools for scanning.
      - name: Initialize CodeQL
        uses: github/codeql-action/init@v2
        with:
          languages: ${{ matrix.language }}
          # If you wish to specify custom queries, you can do so here or in a config file.
          # By default, queries listed here will override any specified in a config file.
          # Prefix the list here with "+" to use these queries and those in the config file.

          # For details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
          # queries: security-extended,security-and-quality

      # Autobuild attempts to build any compiled languages (C/C++, C#, Go, or Java).
      # If this step fails, then you should remove it and run the build manually (see below)
      - name: Autobuild
        uses: github/codeql-action/autobuild@v2

      # ℹ️ Command-line programs to run using the OS shell.
      # 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun

      # If the Autobuild fails above, remove it, uncomment the following three lines,
      # and modify them (or add more) to build your code.

      # - run: |
      #     echo "Run, Build Application using script"
      #     ./location_of_script_within_repo/buildscript.sh

      - name: Perform CodeQL Analysis
        uses: github/codeql-action/analyze@v2
        with:
          category: "/language:${{matrix.language}}"
6 changes: 2 additions & 4 deletions .github/workflows/documentation.yml
@@ -10,15 +10,13 @@ jobs:
   documentation:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v2.3.4
+      - uses: actions/checkout@v4
         with:
           fetch-depth: 1
       - name: Use Node.js
-        uses: actions/setup-node@v2.3.0
+        uses: actions/setup-node@v3
       - name: Install Packages
         run: npm ci
-      - name: Build tests tools
-        run: npm run build:tests-tools
       - name: Generate Documentation
         run: npm run documentation
       - name: Upload
24 changes: 24 additions & 0 deletions .github/workflows/release-drafter-bloom.yml
@@ -0,0 +1,24 @@
name: Release Drafter

on:
  push:
    # branches to consider in the event; optional, defaults to all
    branches:
      - master

jobs:
  update_release_draft:
    permissions:
      contents: write
      pull-requests: write
    runs-on: ubuntu-latest
    steps:
      # Drafts your next Release notes as Pull Requests are merged into "master"
      - uses: release-drafter/release-drafter@v5
        with:
          # (Optional) specify config name to use, relative to .github/. Default: release-drafter.yml
          config-name: release-drafter/bloom-config.yml
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
24 changes: 24 additions & 0 deletions .github/workflows/release-drafter-entraid.yml
@@ -0,0 +1,24 @@
name: Release Drafter

on:
  push:
    # branches to consider in the event; optional, defaults to all
    branches:
      - master

jobs:
  update_release_draft:
    permissions:
      contents: write
      pull-requests: write
    runs-on: ubuntu-latest
    steps:
      # Drafts your next Release notes as Pull Requests are merged into "master"
      - uses: release-drafter/release-drafter@v5
        with:
          # (Optional) specify config name to use, relative to .github/. Default: release-drafter.yml
          config-name: release-drafter/entraid-config.yml
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@ -7,13 +7,18 @@ on:
       - master
 
 jobs:
   update_release_draft:
+    permissions:
+      contents: write
+      pull-requests: write
     runs-on: ubuntu-latest
     steps:
       # Drafts your next Release notes as Pull Requests are merged into "master"
       - uses: release-drafter/release-drafter@v5
         with:
           # (Optional) specify config name to use, relative to .github/. Default: release-drafter.yml
-          config-name: release-drafter-config.yml
+          config-name: release-drafter/json-config.yml
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
24 changes: 24 additions & 0 deletions .github/workflows/release-drafter-search.yml
@@ -0,0 +1,24 @@
name: Release Drafter

on:
  push:
    # branches to consider in the event; optional, defaults to all
    branches:
      - master

jobs:
  update_release_draft:
    permissions:
      contents: write
      pull-requests: write
    runs-on: ubuntu-latest
    steps:
      # Drafts your next Release notes as Pull Requests are merged into "master"
      - uses: release-drafter/release-drafter@v5
        with:
          # (Optional) specify config name to use, relative to .github/. Default: release-drafter.yml
          config-name: release-drafter/search-config.yml
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
24 changes: 24 additions & 0 deletions .github/workflows/release-drafter-time-series.yml
@@ -0,0 +1,24 @@
name: Release Drafter

on:
  push:
    # branches to consider in the event; optional, defaults to all
    branches:
      - master

jobs:
  update_release_draft:
    permissions:
      contents: write
      pull-requests: write
    runs-on: ubuntu-latest
    steps:
      # Drafts your next Release notes as Pull Requests are merged into "master"
      - uses: release-drafter/release-drafter@v5
        with:
          # (Optional) specify config name to use, relative to .github/. Default: release-drafter.yml
          config-name: release-drafter/time-series-config.yml
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
51 changes: 51 additions & 0 deletions .github/workflows/release.yml
@@ -0,0 +1,51 @@
name: Release

on:
  workflow_dispatch:
    inputs:
      version:
        description: 'Version to release ("major", "minor", "patch", or "pre*" version; or specify version like "5.3.3")'
        required: true
        type: string
      args:
        description: 'Additional arguments to pass to release-it (e.g. "--dry-run"). See docs: https://github.com/release-it/release-it/blob/main/docs/git.md#configuration-options'
        required: false
        type: string

jobs:
  release:
    runs-on: ubuntu-latest
    permissions:
      contents: write
      packages: write
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
          ssh-key: ${{ secrets.RELEASE_KEY }}

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '22'
          registry-url: 'https://registry.npmjs.org'

      - name: Install dependencies
        run: npm ci

      - name: Configure Git
        run: |
          git config --local user.email "action@github.com"
          git config --local user.name "GitHub Action"

      # Build all packages
      - name: Build packages
        run: npm run build

      # Release using the monorepo approach
      - name: Release packages
        run: npm run release -- --ci -i ${{ github.event.inputs.version }} ${{ github.event.inputs.args }}
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
95 changes: 95 additions & 0 deletions .github/workflows/stale-issues.yml
@@ -0,0 +1,95 @@
name: "Stale Issue Management"
on:
  schedule:
    # Run daily at midnight UTC
    - cron: "0 0 * * *"
  workflow_dispatch: # Allow manual triggering

env:
  # Default stale policy timeframes
  DAYS_BEFORE_STALE: 365
  DAYS_BEFORE_CLOSE: 30

  # Accelerated timeline for needs-information issues
  NEEDS_INFO_DAYS_BEFORE_STALE: 30
  NEEDS_INFO_DAYS_BEFORE_CLOSE: 7

jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
      # First step: Handle regular issues (excluding needs-information)
      - name: Mark regular issues as stale
        uses: actions/stale@v9
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}

          # Default stale policy
          days-before-stale: ${{ env.DAYS_BEFORE_STALE }}
          days-before-close: ${{ env.DAYS_BEFORE_CLOSE }}

          # Explicit stale label configuration
          stale-issue-label: "stale"
          stale-pr-label: "stale"

          stale-issue-message: |
            This issue has been automatically marked as stale due to inactivity.
            It will be closed in 30 days if no further activity occurs.
            If you believe this issue is still relevant, please add a comment to keep it open.
          close-issue-message: |
            This issue has been automatically closed due to inactivity.
            If you believe this issue is still relevant, please reopen it or create a new issue with updated information.

          # Exclude needs-information issues from this step
          exempt-issue-labels: 'no-stale,needs-information'

          # Remove stale label when issue/PR becomes active again
          remove-stale-when-updated: true

          # Apply to pull requests with same timeline
          days-before-pr-stale: ${{ env.DAYS_BEFORE_STALE }}
          days-before-pr-close: ${{ env.DAYS_BEFORE_CLOSE }}

          stale-pr-message: |
            This pull request has been automatically marked as stale due to inactivity.
            It will be closed in 30 days if no further activity occurs.
          close-pr-message: |
            This pull request has been automatically closed due to inactivity.
            If you would like to continue this work, please reopen the PR or create a new one.

          # Only exclude no-stale PRs (needs-information PRs follow standard timeline)
          exempt-pr-labels: 'no-stale'

      # Second step: Handle needs-information issues with accelerated timeline
      - name: Mark needs-information issues as stale
        uses: actions/stale@v9
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}

          # Accelerated timeline for needs-information
          days-before-stale: ${{ env.NEEDS_INFO_DAYS_BEFORE_STALE }}
          days-before-close: ${{ env.NEEDS_INFO_DAYS_BEFORE_CLOSE }}

          # Explicit stale label configuration
          stale-issue-label: "stale"

          # Only target ISSUES with needs-information label (not PRs)
          only-issue-labels: 'needs-information'

          stale-issue-message: |
            This issue has been marked as stale because it requires additional information
            that has not been provided for 30 days. It will be closed in 7 days if the
            requested information is not provided.
          close-issue-message: |
            This issue has been closed because the requested information was not provided within the specified timeframe.
            If you can provide the missing information, please reopen this issue or create a new one.

          # Disable PR processing for this step
          days-before-pr-stale: -1
          days-before-pr-close: -1

          # Remove stale label when issue becomes active again
          remove-stale-when-updated: true
21 changes: 13 additions & 8 deletions .github/workflows/tests.yml
@@ -5,36 +5,41 @@ on:
     branches:
       - master
       - v4.0
+      - v5
     paths-ignore:
       - "**/*.md"
   pull_request:
     branches:
       - master
       - v4.0
+      - v5
     paths-ignore:
       - "**/*.md"
 jobs:
   tests:
     runs-on: ubuntu-latest
     strategy:
       fail-fast: false
       matrix:
-        node-version: ['12', '14', '16']
-        redis-version: ['5', '6.0', '6.2']
+        node-version: ["18", "20", "22"]
+        redis-version: ["rs-7.4.0-v1", "8.0.2", "8.2", "8.4-RC1-pre.2"]
     steps:
-      - uses: actions/checkout@v2.3.4
+      - uses: actions/checkout@v4
         with:
           fetch-depth: 1
       - name: Use Node.js ${{ matrix.node-version }}
-        uses: actions/setup-node@v2.3.0
+        uses: actions/setup-node@v4
         with:
           node-version: ${{ matrix.node-version }}
-      - name: Update npm
-        run: npm i -g npm
-        if: ${{ matrix.node-version <= 14 }}
       - name: Install Packages
         run: npm ci
-      - name: Build tests tools
-        run: npm run build:tests-tools
+      - name: Build
+        run: npm run build
       - name: Run Tests
-        run: npm run test -- -- --forbid-only --redis-version=${{ matrix.redis-version }}
+        run: npm run test -ws --if-present -- --forbid-only --redis-version=${{ matrix.redis-version }}
       - name: Upload to Codecov
         run: |
           curl https://keybase.io/codecovsecurity/pgp_keys.asc | gpg --no-default-keyring --keyring trustedkeys.gpg --import
1 change: 1 addition & 0 deletions .gitignore
@@ -7,3 +7,4 @@ node_modules/
.DS_Store
dump.rdb
documentation/
tsconfig.tsbuildinfo
12 changes: 0 additions & 12 deletions .npmignore

This file was deleted.

7 changes: 0 additions & 7 deletions .release-it.json

This file was deleted.

28 changes: 14 additions & 14 deletions CHANGELOG.md
@@ -21,7 +21,7 @@

- Fix `NOAUTH` error when using authentication & database (#1681)
- Allow to `.quit()` in PubSub mode (#1766)
- Add an option to configurate `name` on a client (#1758)
- Add an option to configure `name` on a client (#1758)
- Lowercase commands (`client.hset`) in `legacyMode`
- Fix PubSub resubscribe (#1764)
- Fix `RedisSocketOptions` type (#1741)
@@ -34,7 +34,7 @@

## v4.0.0 - 24 Nov, 2021

This version is a major change and refactor, adding modern JavaScript capabilities and multiple breaking changes. See the [migration guide](../../docs/v3-to-v4.md) for tips on how to upgrade.
This version is a major change and refactor, adding modern JavaScript capabilities and multiple breaking changes. See the [migration guide](./docs/v3-to-v4.md) for tips on how to upgrade.

### Breaking Changes

@@ -49,10 +49,10 @@ This version is a major change and refactor, adding modern JavaScript capabiliti

- Added support for Promises
- Added built-in TypeScript declaration files enabling code completion
- Added support for [clustering](../../README.md#cluster)
- Added idiomatic arguments and responses to [Redis commands](../../README.md#redis-commands)
- Added full support for [Lua Scripts](../../README.md#lua-scripts)
- Added support for [SCAN iterators](../../README.md#scan-iterator)
- Added support for [clustering](./README.md#cluster)
- Added idiomatic arguments and responses to [Redis commands](./README.md#redis-commands)
- Added full support for [Lua Scripts](./README.md#lua-scripts)
- Added support for [SCAN iterators](./README.md#scan-iterator)
- Added the ability to extend Node Redis with Redis Module commands

## v3.1.2
@@ -450,7 +450,7 @@ No code change

## v2.2.0 - 12 Oct, 2015 - The peregrino falcon

The peregrino falcon is the fasted bird on earth and this is what this release is all about: Increased performance for heavy usage by up to **400%** [sic!] and increased overall performance for any command as well. Please check the benchmarks in the [README.md](README.md) for further details.
The peregrino falcon is the fasted bird on earth and this is what this release is all about: Increased performance for heavy usage by up to **400%** [sic!] and increased overall performance for any command as well. Please check the benchmarks in the [README.md](./README.md) for further details.

Features

@@ -466,7 +466,7 @@ Features
Bugfixes

- Fixed a javascript parser regression introduced in 2.0 that could result in timeouts on high load. ([@BridgeAR](https://github.com/BridgeAR))
- I was not able to write a regression test for this, since the error seems to only occur under heavy load with special conditions. So please have a look for timeouts with the js parser, if you use it and report all issues and switch to the hiredis parser in the meanwhile. If you're able to come up with a reproducable test case, this would be even better :)
- I was not able to write a regression test for this, since the error seems to only occur under heavy load with special conditions. So please have a look for timeouts with the js parser, if you use it and report all issues and switch to the hiredis parser in the meanwhile. If you're able to come up with a reproducible test case, this would be even better :)
- Fixed should_buffer boolean for .exec, .select and .auth commands not being returned and fix a couple special conditions ([@BridgeAR](https://github.com/BridgeAR))

If you do not rely on transactions but want to reduce the RTT you can use .batch from now on. It'll behave just the same as .multi but it does not have any transaction and therefore won't roll back any failed commands.<br>
@@ -518,7 +518,7 @@ Bugfixes:

- Fix argument mutation while using the array notation with the multi constructor (@BridgeAR)
- Fix multi.hmset key not being type converted if used with an object and key not being a string (@BridgeAR)
- Fix parser errors not being catched properly (@BridgeAR)
- Fix parser errors not being caught properly (@BridgeAR)
- Fix a crash that could occur if a redis server does not return the info command as usual #541 (@BridgeAR)
- Explicitly passing undefined as a callback statement will work again. E.g. client.publish('channel', 'message', undefined); (@BridgeAR)

@@ -560,13 +560,13 @@ This is the biggest release that node_redis had since it was released in 2010. A
- Increased coverage by 10% and add a lot of tests to make sure everything works as it should. We now reached 97% :-) (@BridgeAR)
- Remove dead code, clean up and refactor very old chunks (@BridgeAR)
- Don't flush the offline queue if reconnecting (@BridgeAR)
- Emit all errors insteaf of throwing sometimes and sometimes emitting them (@BridgeAR)
- Emit all errors instead of throwing sometimes and sometimes emitting them (@BridgeAR)
- _auth_pass_ passwords are now checked to be a valid password (@jcppman & @BridgeAR)

## Bug fixes:

- Don't kill the app anymore by randomly throwing errors sync instead of emitting them (@BridgeAR)
- Don't catch user errors anymore occuring in callbacks (no try callback anymore & more fixes for the parser) (@BridgeAR)
- Don't catch user errors anymore occurring in callbacks (no try callback anymore & more fixes for the parser) (@BridgeAR)
- Early garbage collection of queued items (@dohse)
- Fix js parser returning errors as strings (@BridgeAR)
- Do not wrap errors into other errors (@BridgeAR)
@@ -588,19 +588,19 @@ This is the biggest release that node_redis had since it was released in 2010. A
## Breaking changes:

1. redis.send_command commands have to be lower case from now on. This does only apply if you use `.send_command` directly instead of the convenient methods like `redis.command`.
2. Error messages have changed quite a bit. If you depend on a specific wording please check your application carfully.
2. Error messages have changed quite a bit. If you depend on a specific wording please check your application carefully.
3. Errors are from now on always either returned if a callback is present or emitted. They won't be thrown (neither sync, nor async).
4. The Multi error handling has changed a lot!

- All errors are from now on errors instead of strings (this only applied to the js parser).
- If an error occurs while queueing the commands an EXECABORT error will be returned including the failed commands as `.errors` property instead of an array with errors.
- If an error occurs while executing the commands and that command has a callback it'll return the error as first parameter (`err, undefined` instead of `null, undefined`).
- All the errors occuring while executing the commands will stay in the result value as error instance (if you used the js parser before they would have been strings). Be aware that the transaction won't be aborted if those error occurr!
- All the errors occurring while executing the commands will stay in the result value as error instance (if you used the js parser before they would have been strings). Be aware that the transaction won't be aborted if those error occur!
- If `multi.exec` does not have a callback and an EXECABORT error occurs, it'll emit that error instead.

5. If redis can't connect to your redis server it'll give up after a certain point of failures (either max connection attempts or connection timeout exceeded). If that is the case it'll emit an CONNECTION_BROKEN error. You'll have to initiate a new client to try again afterwards.
6. The offline queue is not flushed anymore on a reconnect. It'll stay until node_redis gives up trying to reach the server or until you close the connection.
7. Before this release node_redis catched user errors and threw them async back. This is not the case anymore! No user behavior of what so ever will be tracked or catched.
7. Before this release node_redis caught user errors and threw them async back. This is not the case anymore! No user behavior of what so ever will be tracked or caught.
8. The keyspace of `redis.server_info` (db0...) is from now on an object instead of an string.

NodeRedis also thanks @qdb, @tobek, @cvibhagool, @frewsxcv, @davidbanham, @serv, @vitaliylag, @chrishamant, @GamingCoder and all other contributors that I may have missed for their contributions!
35 changes: 16 additions & 19 deletions LICENSE
@@ -1,24 +1,21 @@
MIT License

Copyright (c) 2016-present Node Redis contributors.
Copyright (c) 2022-2023, Redis, inc.

Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation
files (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use,
copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following
conditions:
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
358 changes: 174 additions & 184 deletions README.md

Large diffs are not rendered by default.

10 changes: 5 additions & 5 deletions benchmark/lib/index.js
@@ -1,10 +1,10 @@
import yargs from 'yargs';
import { hideBin } from 'yargs/helpers';
import { promises as fs } from 'fs';
import { fork } from 'child_process';
import { URL, fileURLToPath } from 'url';
import { once } from 'events';
import { extname } from 'path';
import { promises as fs } from 'node:fs';
import { fork } from 'node:child_process';
import { URL, fileURLToPath } from 'node:url';
import { once } from 'node:events';
import { extname } from 'node:path';

async function getPathChoices() {
const dirents = await fs.readdir(new URL('.', import.meta.url), {
20 changes: 20 additions & 0 deletions benchmark/lib/ping/ioredis-auto-pipeline.js
@@ -0,0 +1,20 @@
import Redis from 'ioredis';

export default async (host) => {
const client = new Redis({
host,
lazyConnect: true,
enableAutoPipelining: true
});

await client.connect();

return {
benchmark() {
return client.ping();
},
teardown() {
return client.disconnect();
}
}
};
21 changes: 21 additions & 0 deletions benchmark/lib/ping/local-resp2.js
@@ -0,0 +1,21 @@
import { createClient } from 'redis-local';

export default async (host) => {
const client = createClient({
socket: {
host
},
RESP: 2
});

await client.connect();

return {
benchmark() {
return client.ping();
},
teardown() {
return client.disconnect();
}
};
};
23 changes: 23 additions & 0 deletions benchmark/lib/ping/local-resp3-buffer-proxy.js
@@ -0,0 +1,23 @@
import { createClient, RESP_TYPES } from 'redis-local';

export default async (host) => {
const client = createClient({
socket: {
host
},
RESP: 3
}).withTypeMapping({
[RESP_TYPES.SIMPLE_STRING]: Buffer
});

await client.connect();

return {
benchmark() {
return client.ping();
},
teardown() {
return client.disconnect();
}
};
};
24 changes: 24 additions & 0 deletions benchmark/lib/ping/local-resp3-buffer.js
@@ -0,0 +1,24 @@
import { createClient, RESP_TYPES } from 'redis-local';

export default async (host) => {
const client = createClient({
socket: {
host
},
commandOptions: {
[RESP_TYPES.SIMPLE_STRING]: Buffer
},
RESP: 3
});

await client.connect();

return {
benchmark() {
return client.ping();
},
teardown() {
return client.disconnect();
}
};
};
27 changes: 27 additions & 0 deletions benchmark/lib/ping/local-resp3-module-with-flags.js
@@ -0,0 +1,27 @@
import { createClient } from 'redis-local';
import PING from 'redis-local/dist/lib/commands/PING.js';

export default async (host) => {
const client = createClient({
socket: {
host
},
RESP: 3,
modules: {
module: {
ping: PING.default
}
}
});

await client.connect();

return {
benchmark() {
return client.withTypeMapping({}).module.ping();
},
teardown() {
return client.disconnect();
}
};
};
27 changes: 27 additions & 0 deletions benchmark/lib/ping/local-resp3-module.js
@@ -0,0 +1,27 @@
import { createClient } from 'redis-local';
import PING from 'redis-local/dist/lib/commands/PING.js';

export default async (host) => {
const client = createClient({
socket: {
host
},
RESP: 3,
modules: {
module: {
ping: PING.default
}
}
});

await client.connect();

return {
benchmark() {
return client.module.ping();
},
teardown() {
return client.disconnect();
}
};
};
21 changes: 21 additions & 0 deletions benchmark/lib/ping/local-resp3.js
@@ -0,0 +1,21 @@
import { createClient } from 'redis-local';

export default async (host) => {
const client = createClient({
socket: {
host
},
RESP: 3
});

await client.connect();

return {
benchmark() {
return client.ping();
},
teardown() {
return client.disconnect();
}
};
};
4 changes: 2 additions & 2 deletions benchmark/lib/ping/v3.js
@@ -1,6 +1,6 @@
import { createClient } from 'redis-v3';
import { once } from 'events';
import { promisify } from 'util';
import { once } from 'node:events';
import { promisify } from 'node:util';

export default async (host) => {
const client = createClient({ host }),
2 changes: 1 addition & 1 deletion benchmark/lib/ping/v4.js
@@ -1,4 +1,4 @@
import { createClient } from '@node-redis/client';
import { createClient } from 'redis-v4';

export default async (host) => {
const client = createClient({
6 changes: 3 additions & 3 deletions benchmark/lib/runner.js
@@ -1,7 +1,7 @@
import yargs from 'yargs';
import { hideBin } from 'yargs/helpers';
import { basename } from 'path';
import { promises as fs } from 'fs';
import { basename } from 'node:path';
import { promises as fs } from 'node:fs';
import * as hdr from 'hdr-histogram-js';
hdr.initWebAssemblySync();

@@ -71,7 +71,7 @@ const benchmarkStart = process.hrtime.bigint(),
histogram = await run(times),
benchmarkNanoseconds = process.hrtime.bigint() - benchmarkStart,
json = {
timestamp,
// timestamp,
operationsPerSecond: times / Number(benchmarkNanoseconds) * 1_000_000_000,
p0: histogram.getValueAtPercentile(0),
p50: histogram.getValueAtPercentile(50),
2 changes: 1 addition & 1 deletion benchmark/lib/set-get-delete-string/index.js
@@ -1,6 +1,6 @@
import yargs from 'yargs';
import { hideBin } from 'yargs/helpers';
import { randomBytes } from 'crypto';
import { randomBytes } from 'node:crypto';

const { size } = yargs(hideBin(process.argv))
.option('size', {
4 changes: 2 additions & 2 deletions benchmark/lib/set-get-delete-string/v3.js
@@ -1,6 +1,6 @@
import { createClient } from 'redis-v3';
import { once } from 'events';
import { promisify } from 'util';
import { once } from 'node:events';
import { promisify } from 'node:util';

export default async (host, { randomString }) => {
const client = createClient({ host }),
2 changes: 1 addition & 1 deletion benchmark/lib/set-get-delete-string/v4.js
@@ -1,4 +1,4 @@
import { createClient } from '@node-redis/client';
import { createClient } from '@redis/client';

export default async (host, { randomString }) => {
const client = createClient({
543 changes: 146 additions & 397 deletions benchmark/package-lock.json

Large diffs are not rendered by default.

13 changes: 7 additions & 6 deletions benchmark/package.json
@@ -1,16 +1,17 @@
{
"name": "@node-redis/client-benchmark",
"name": "@redis/client-benchmark",
"private": true,
"main": "./lib",
"type": "module",
"scripts": {
"start": "node ."
},
"dependencies": {
"@node-redis/client": "../packages/client",
"hdr-histogram-js": "2.0.1",
"ioredis": "4.28.1",
"redis-v3": "npm:redis@3.1.2",
"yargs": "17.3.0"
"hdr-histogram-js": "3.0.0",
"ioredis": "5",
"redis-local": "file:../packages/client",
"redis-v3": "npm:redis@3",
"redis-v4": "npm:redis@4",
"yargs": "17.7.1"
}
}
8 changes: 5 additions & 3 deletions docs/FAQ.md
@@ -4,11 +4,13 @@ Nobody has *actually* asked these questions. But, we needed somewhere to put all

## What happens when the network goes down?

When a socket closed unexpectedly, all the commands that were already sent will reject as they might have been executed on the server. The rest will remain queued in memory until a new socket is established. If the client is closed—either by returning an error from [`reconnectStrategy`](./client-configuration.md#reconnect-strategy) or by manually calling `.disconnect()`—they will be rejected.
When a socket closes unexpectedly, all the commands that were already sent will reject as they might have been executed on the server. The rest will remain queued in memory until a new socket is established. If the client is closed—either by returning an error from [`reconnectStrategy`](./client-configuration.md#reconnect-strategy) or by manually calling `.disconnect()`—they will be rejected.

If you don't want to queue commands in memory until a new socket is established, set the `disableOfflineQueue` option to `true` in the [client configuration](./client-configuration.md). This will result in those commands being rejected.

## How are commands batched?

Commands are pipelined using [`queueMicrotask`](https://nodejs.org/api/globals.html#globals_queuemicrotask_callback).
Commands are pipelined using [`setImmediate`](https://nodejs.org/api/timers.html#setimmediatecallback-args).

If `socket.write()` returns `false`—meaning that ["all or part of the data was queued in user memory"](https://nodejs.org/api/net.html#net_socket_write_data_encoding_callback:~:text=all%20or%20part%20of%20the%20data%20was%20queued%20in%20user%20memory)—the commands will stack in memory until the [`drain`](https://nodejs.org/api/net.html#net_event_drain) event is fired.
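
The batching mechanism above can be sketched in isolation. In the snippet below (no Redis involved; `sendCommand`, `queue`, and `batches` are illustrative names, not the client's internals), everything issued in the same event-loop tick is queued in memory and flushed together in a single `setImmediate` callback, which stands in for one `socket.write()`:

```javascript
// Illustrative sketch of setImmediate-based command batching.
const queue = [];    // commands waiting to be written
const batches = [];  // each entry stands in for one socket.write()
let flushScheduled = false;

function sendCommand(command) {
  queue.push(command);
  if (!flushScheduled) {
    flushScheduled = true;
    setImmediate(() => {
      batches.push(queue.splice(0)); // flush everything queued so far
      flushScheduled = false;
    });
  }
}

sendCommand('PING');
sendCommand('GET key'); // issued in the same tick, so it joins the same batch

setImmediate(() => {
  console.log(batches); // both commands were flushed together in one batch
});
```

This is why firing off several commands and awaiting them together (e.g. with `Promise.all`) costs only a single batched write rather than one write per command.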

@@ -17,7 +19,7 @@ If `socket.write()` returns `false`—meaning that ["all or part of the data was
Redis has support for [modules](https://redis.io/modules) and running [Lua scripts](../README.md#lua-scripts) within the Redis context. To take advantage of typing within these scenarios, `RedisClient` and `RedisCluster` should be used with [typeof](https://www.typescriptlang.org/docs/handbook/2/typeof-types.html), rather than the base types `RedisClientType` and `RedisClusterType`.

```typescript
import { createClient } from '@node-redis/client';
import { createClient } from '@redis/client';

export const client = createClient();

46 changes: 46 additions & 0 deletions docs/RESP.md
@@ -0,0 +1,46 @@
# Mapping RESP types

RESP, which stands for **R**edis **SE**rialization **P**rotocol, is the protocol used by Redis to communicate with clients. This document shows how RESP types can be mapped to JavaScript types. You can learn more about RESP itself in the [official documentation](https://redis.io/docs/reference/protocol-spec/).

By default, each type is mapped to the first option in the lists below. To change this, configure a [`typeMapping`](.).

## RESP2

- Integer (`:`) => `number`
- Simple String (`+`) => `string | Buffer`
- Blob String (`$`) => `string | Buffer`
- Simple Error (`-`) => `ErrorReply`
- Array (`*`) => `Array`

> NOTE: the first type is the default type
## RESP3

- Null (`_`) => `null`
- Boolean (`#`) => `boolean`
- Number (`:`) => `number | string`
- Big Number (`(`) => `BigInt | string`
- Double (`,`) => `number | string`
- Simple String (`+`) => `string | Buffer`
- Blob String (`$`) => `string | Buffer`
- Verbatim String (`=`) => `string | Buffer | VerbatimString`
- Simple Error (`-`) => `ErrorReply`
- Blob Error (`!`) => `ErrorReply`
- Array (`*`) => `Array`
- Set (`~`) => `Array | Set`
- Map (`%`) => `object | Map | Array`
- Push (`>`) => `Array` => PubSub push/`'push'` event

> NOTE: the first type is the default type
### Map keys and Set members

When decoding a Map to `Map | object` or a Set to `Set`, keys and members of type "Simple String" or "Blob String" will be decoded as `string`s which enables lookups by value, ignoring type mapping. If you want them as `Buffer`s, decode them as `Array`s instead.

### Not Implemented

These parts of RESP3 are not yet implemented in Redis itself (at the time of writing), so are not yet implemented in the Node-Redis client either:

- [Attribute type](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#attribute-type)
- [Streamed strings](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#streamed-strings)
- [Streamed aggregated data types](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#streamed-aggregated-data-types)
94 changes: 63 additions & 31 deletions docs/client-configuration.md

Large diffs are not rendered by default.

150 changes: 124 additions & 26 deletions docs/clustering.md
@@ -4,28 +4,22 @@

Connecting to a cluster is a bit different. Create the client by specifying some (or all) of the nodes in your cluster and then use it like a regular client instance:

```typescript
```javascript
import { createCluster } from 'redis';

(async () => {
const cluster = createCluster({
rootNodes: [
{
url: 'redis://10.0.0.1:30001'
},
{
url: 'redis://10.0.0.2:30002'
}
]
});

cluster.on('error', (err) => console.log('Redis Cluster Error', err));

await cluster.connect();

await cluster.set('key', 'value');
const value = await cluster.get('key');
})();
const cluster = await createCluster({
rootNodes: [{
url: 'redis://10.0.0.1:30001'
}, {
url: 'redis://10.0.0.2:30002'
}]
})
.on('error', err => console.log('Redis Cluster Error', err))
.connect();

await cluster.set('key', 'value');
const value = await cluster.get('key');
await cluster.close();
```

## `createCluster` configuration
@@ -34,23 +28,127 @@ import { createCluster } from 'redis';
| Property | Default | Description |
|------------------------|---------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| rootNodes | | An array of root nodes that are part of the cluster, which will be used to get the cluster topology. Each element in the array is a client configuration object. There is no need to specify every node in the cluster, 3 should be enough to reliably connect and obtain the cluster configuration from the server |
| rootNodes | | An array of root nodes that are part of the cluster, which will be used to get the cluster topology. Each element in the array is a client configuration object. There is no need to specify every node in the cluster: 3 should be enough to reliably connect and obtain the cluster configuration from the server |
| defaults | | The default configuration values for every client in the cluster. Use this for example when specifying an ACL user to connect with |
| useReplicas | `false` | When `true`, distribute load by executing readonly commands (such as `GET`, `GEOSEARCH`, etc.) across all cluster nodes. When `false`, only use master nodes |
| minimizeConnections | `false` | When `true`, `.connect()` will only discover the cluster topology, without actually connecting to all the nodes. Useful for short-term or Pub/Sub-only connections. |
| maxCommandRedirections | `16` | The maximum number of times a command will be redirected due to `MOVED` or `ASK` errors |
| modules | | Object defining which [Redis Modules](../README.md#modules) to include |
| scripts | | Object defining Lua Scripts to use with this client (see [Lua Scripts](../README.md#lua-scripts)) |
| nodeAddressMap | | Defines the [node address mapping](#node-address-map) |
| modules | | Included [Redis Modules](../README.md#packages) |
| scripts | | Script definitions (see [Lua Scripts](./programmability.md#lua-scripts)) |
| functions | | Function definitions (see [Functions](./programmability.md#functions)) |

## Usage

Most redis commands are the same as with individual clients.

### Unsupported Redis Commands

If you want to run commands and/or use arguments that Node Redis doesn't know about (yet!) use `.sendCommand()`.

When clustering, `sendCommand` takes 3 arguments to help with routing to the correct redis node:
* `firstKey`: the key that is being operated on, or `undefined` to route to a random node.
* `isReadOnly`: determines if the command needs to go to the master or may go to a replica.
* `args`: the command and all arguments (including the key), as an array of strings.

```javascript
await cluster.sendCommand("key", false, ["SET", "key", "value", "NX"]); // 'OK'

await cluster.sendCommand("key", true, ["HGETALL", "key"]); // ['key1', 'field1', 'key2', 'field2']
```

## Auth with password and username

Specifying the password in the URL or a root node will only affect the connection to that specific node. In case you want to set the password for all the connections being created from a cluster instance, use the `defaults` option.

```javascript
createCluster({
rootNodes: [{
url: 'redis://10.0.0.1:30001'
}, {
url: 'redis://10.0.0.2:30002'
}],
defaults: {
username: 'username',
password: 'password'
}
});
```

## Node Address Map

A mapping between the addresses in the cluster (see `CLUSTER SHARDS`) and the addresses the client should connect to.
Useful when the cluster is running on a different network to the client.

```javascript
const rootNodes = [{
url: 'external-host-1.io:30001'
}, {
url: 'external-host-2.io:30002'
}];

// Use either a static mapping:
createCluster({
rootNodes,
nodeAddressMap: {
'10.0.0.1:30001': {
host: 'external-host.io',
port: 30001
},
'10.0.0.2:30002': {
host: 'external-host.io',
port: 30002
}
}
});

// or create the mapping dynamically, as a function:
createCluster({
rootNodes,
nodeAddressMap(address) {
const indexOfDash = address.lastIndexOf('-'),
indexOfDot = address.indexOf('.', indexOfDash),
indexOfColons = address.indexOf(':', indexOfDot);

return {
host: `external-host-${address.substring(indexOfDash + 1, indexOfDot)}.io`,
port: Number(address.substring(indexOfColons + 1))
};
}
});
```

> This is a common problem when using ElastiCache. See [Accessing ElastiCache from outside AWS](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/accessing-elasticache.html) for more information on that.
### Events

The Node Redis Cluster class extends Node.js’s EventEmitter and emits the following events:

| Name | When | Listener arguments |
| ----------------------- | ---------------------------------------------------------------------------------- | --------------------------------------------------------- |
| `connect`               | The cluster has successfully connected and is ready to use                          | _No arguments_                                             |
| `disconnect` | The cluster has disconnected | _No arguments_ |
| `error` | The cluster has errored | `(error: Error)` |
| `node-ready` | A cluster node is ready to establish a connection | `(node: { host: string, port: number })` |
| `node-connect` | A cluster node has connected | `(node: { host: string, port: number })` |
| `node-reconnecting` | A cluster node is attempting to reconnect after an error | `(node: { host: string, port: number })` |
| `node-disconnect` | A cluster node has disconnected | `(node: { host: string, port: number })` |
| `node-error`            | A cluster node has errored (usually during TCP connection)                          | `(error: Error, node: { host: string, port: number })`     |

> :warning: You **MUST** listen to `error` events. If a cluster doesn't have at least one `error` listener registered and
> an `error` occurs, that error will be thrown and the Node.js process will exit. See the [`EventEmitter` docs](https://nodejs.org/api/events.html#events_error_events) for more details.
## Command Routing

### Commands that operate on Redis Keys

Commands such as `GET`, `SET`, etc. will be routed by the first key, for instance `MGET 1 2 3` will be routed by the key `1`.
Commands such as `GET`, `SET`, etc. are routed by the first key specified. For example `MGET 1 2 3` will be routed by the key `1`.
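
As a toy illustration of that rule (the function below is purely illustrative; the real client goes on to hash this key with CRC16 mod 16384 to pick the slot):

```javascript
// Illustrative only: which key a multi-key command is routed by.
function routingKey(args) {
  const [command, ...keys] = args; // e.g. ['MGET', '1', '2', '3']
  return keys[0];                  // route by the first key
}

console.log(routingKey(['MGET', '1', '2', '3'])); // '1'
```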

### [Server Commands](https://redis.io/commands#server)

Admin commands such as `MEMORY STATS`, `FLUSHALL`, etc. are not attached to the cluster, and must be executed on a specific node via `.getSlotMaster()`.

### "Forwarded Commands"

Certain commands (e.g. `PUBLISH`) are forwarded to other cluster nodes by the Redis server. The client sends these commands to a random node in order to spread the load across the cluster.

81 changes: 81 additions & 0 deletions docs/command-options.md
@@ -0,0 +1,81 @@
# Command Options

> :warning: The command options API in v5 has breaking changes from the previous version. For more details, refer to the [v4-to-v5 guide](./v4-to-v5.md#command-options).

Command Options are used to create "proxy clients" that change the behavior of executed commands. See the sections below for details.

## Type Mapping

Some [RESP types](./RESP.md) can be mapped to more than one JavaScript type. For example, "Blob String" can be mapped to `string` or `Buffer`. You can override the default type mapping using the `withTypeMapping` function:

```javascript
await client.get('key'); // `string | null`

const proxyClient = client.withTypeMapping({
[TYPES.BLOB_STRING]: Buffer
});

await proxyClient.get('key'); // `Buffer | null`
```

See [RESP](./RESP.md) for a full list of types.

## Abort Signal

The client [batches commands](./FAQ.md#how-are-commands-batched) before sending them to Redis. Commands that haven't been written to the socket yet can be aborted using the [`AbortSignal`](https://developer.mozilla.org/en-US/docs/Web/API/AbortSignal) API:

```javascript
const controller = new AbortController(),
client = client.withAbortSignal(controller.signal);

try {
const promise = client.get('key');
controller.abort();
await promise;
} catch (err) {
// AbortError
}
```


## Timeout

This option is similar to the Abort Signal one, but provides an easier way to set a timeout for commands. Again, this applies to commands that haven't been written to the socket yet.

```javascript
const client = createClient({
commandOptions: {
timeout: 1000
}
})
```

## ASAP

Commands that are executed in "asap" mode are added to the beginning of the queue of commands waiting to be sent.

```javascript
const asapClient = client.asap();
await asapClient.ping();
```

## `withCommandOptions`

You can set all of the above command options in a single call with the `withCommandOptions` function:

```javascript
client.withCommandOptions({
typeMapping: ...,
abortSignal: ...,
asap: ...
});
```

If any of the above options are omitted, the default value will be used. For example, the following client would **not** be in ASAP mode:

```javascript
client.asap().withCommandOptions({
typeMapping: ...,
abortSignal: ...
});
```
67 changes: 0 additions & 67 deletions docs/isolated-execution.md

This file was deleted.

74 changes: 74 additions & 0 deletions docs/pool.md
@@ -0,0 +1,74 @@
# `RedisClientPool`

Sometimes you want to run your commands on an exclusive connection. There are a few reasons to do this:

- You want to run a blocking command that will take over the connection, such as `BLPOP` or `BLMOVE`.
- You're using [transactions](https://redis.io/docs/interact/transactions/) and need to `WATCH` a key or keys for changes.
- Any other scenario that relies on per-connection state.

For those use cases you'll need to create a connection pool.

## Creating a pool

You can create a pool using the `createClientPool` function:

```javascript
import { createClientPool } from 'redis';

const pool = await createClientPool()
.on('error', err => console.error('Redis Client Pool Error', err));
```

The function accepts two arguments: the client configuration (see [here](./client-configuration.md) for more details) and the pool configuration:

| Property | Default | Description |
|----------------|---------|--------------------------------------------------------------------------------------------------------------------------------|
| minimum | 1 | The minimum number of clients the pool holds. The pool won't close clients while the pool size is at or below this minimum. |
| maximum | 100 | The maximum number of clients the pool will have at once. Once reached, the pool won't create more clients and will queue requests in memory. |
| acquireTimeout | 3000 | The maximum time (in ms) a task can wait in the queue. The pool will reject the task with `TimeoutError` in case of a timeout. |
| cleanupDelay | 3000 | The time to wait before cleaning up unused clients. |
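
The `maximum` and `acquireTimeout` semantics can be sketched with a toy pool model — the `TinyPool` class below is hypothetical, not the library's implementation: a task either grabs a free client, or waits in a FIFO queue and is rejected if it waits longer than the timeout.

```javascript
// Toy model of pool acquire semantics (illustrative only):
// a fixed set of clients, a FIFO wait queue, and a per-waiter acquire timeout.
class TinyPool {
  constructor(clients, acquireTimeout) {
    this.free = [...clients];
    this.queue = [];
    this.acquireTimeout = acquireTimeout;
  }

  acquire() {
    if (this.free.length > 0) return Promise.resolve(this.free.pop());
    return new Promise((resolve, reject) => {
      const waiter = {
        resolve,
        timer: setTimeout(() => {
          // Waited too long in the queue -> reject, like the pool's TimeoutError.
          this.queue.splice(this.queue.indexOf(waiter), 1);
          reject(new Error('TimeoutError'));
        }, this.acquireTimeout)
      };
      this.queue.push(waiter);
    });
  }

  release(client) {
    const waiter = this.queue.shift();
    if (waiter) {
      clearTimeout(waiter.timer);
      waiter.resolve(client); // hand the client straight to the next waiter
    } else {
      this.free.push(client);
    }
  }
}
```

A real pool also creates clients lazily up to `maximum` and closes idle ones after `cleanupDelay`; this sketch only captures the queueing and timeout behavior.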

You can also create a pool from a client (reusing its configuration):
```javascript
const pool = await client.createPool()
.on('error', err => console.error('Redis Client Pool Error', err));
```

## The Simple Scenario

All the client APIs are exposed on the pool instance directly, and will execute the commands using one of the available clients.

```javascript
await pool.sendCommand(['PING']); // 'PONG'
await pool.ping(); // 'PONG'
await pool.withTypeMapping({
[RESP_TYPES.SIMPLE_STRING]: Buffer
}).ping(); // Buffer
```

## Transactions

Things get a little more complex with transactions. Here we are `.watch()`ing some keys. If the keys change during the transaction, a `WatchError` is thrown when `.exec()` is called:

```javascript
try {
await pool.execute(async client => {
await client.watch('key');

const multi = client.multi()
.ping()
.get('key');

if (Math.random() > 0.5) {
await client.watch('another-key');
multi.set('another-key', await client.get('another-key') / 2);
}

return multi.exec();
});
} catch (err) {
if (err instanceof WatchError) {
// the transaction aborted
}
}
```
85 changes: 85 additions & 0 deletions docs/programmability.md
@@ -0,0 +1,85 @@
# [Programmability](https://redis.io/docs/manual/programmability/)

Redis provides a programming interface that allows code execution on the Redis server.

## [Functions](https://redis.io/docs/manual/programmability/functions-intro/)

The following example retrieves a key in Redis, returning the value of the key incremented by an integer. For example, if your key _foo_ has the value _17_ and we run `add('foo', 25)`, it returns the answer to Life, the Universe and Everything.

```lua
#!lua name=library

redis.register_function {
function_name = 'add',
callback = function(keys, args) return redis.call('GET', keys[1]) + args[1] end,
flags = { 'no-writes' }
}
```

Here is the same example, but in a format that can be pasted into the `redis-cli`.

```
FUNCTION LOAD "#!lua name=library\nredis.register_function{function_name='add', callback=function(keys, args) return redis.call('GET', keys[1])+args[1] end, flags={'no-writes'}}"
```

Load the prior Redis function on the _Redis server_ before running the example below.

```typescript
import { CommandParser, createClient, RedisArgument } from '@redis/client';
import { NumberReply } from '@redis/client/dist/lib/RESP/types.js';

const client = createClient({
functions: {
library: {
add: {
NUMBER_OF_KEYS: 1,
parseCommand(
parser: CommandParser,
key: RedisArgument,
toAdd: RedisArgument
) {
parser.pushKey(key)
parser.push(toAdd)
},
transformReply: undefined as unknown as () => NumberReply
}
}
}
});

await client.connect();
await client.set('key', '1');
await client.library.add('key', '2'); // 3
```

## [Lua Scripts](https://redis.io/docs/manual/programmability/eval-intro/)

The following is an end-to-end example of the prior concept.

```typescript
import { CommandParser, createClient, defineScript, RedisArgument } from '@redis/client';
import { NumberReply } from '@redis/client/dist/lib/RESP/types.js';

const client = createClient({
scripts: {
add: defineScript({
SCRIPT: 'return redis.call("GET", KEYS[1]) + ARGV[1];',
NUMBER_OF_KEYS: 1,
FIRST_KEY_INDEX: 1,
parseCommand(
parser: CommandParser,
key: RedisArgument,
toAdd: RedisArgument
) {
parser.pushKey(key)
parser.push(toAdd)
},
transformReply: undefined as unknown as () => NumberReply
})
}
});

await client.connect();
await client.set('key', '1');
await client.add('key', '2'); // 3
```
92 changes: 92 additions & 0 deletions docs/pub-sub.md
@@ -0,0 +1,92 @@
# Pub/Sub

The Pub/Sub API is implemented by `RedisClient`, `RedisCluster`, and `RedisSentinel`.

## Pub/Sub with `RedisClient`

### RESP2

Using RESP2, Pub/Sub "takes over" the connection (a client with subscriptions will not execute commands); it therefore requires a dedicated connection. You can easily get one by `.duplicate()`ing an existing `RedisClient`:

```javascript
const subscriber = client.duplicate();
subscriber.on('error', err => console.error(err));
await subscriber.connect();
```

> When working with either `RedisCluster` or `RedisSentinel`, this is handled automatically for you.

### `sharded-channel-moved` event

`RedisClient` emits the `sharded-channel-moved` event when the ["cluster slot"](https://redis.io/docs/reference/cluster-spec/#key-distribution-model) of a subscribed [Sharded Pub/Sub](https://redis.io/docs/manual/pubsub/#sharded-pubsub) channel has been moved to another shard.

The event listener signature is as follows:
```typescript
(
channel: string,
listeners: {
buffers: Set<Listener>;
strings: Set<Listener>;
}
)
```

> When working with `RedisCluster`, this is handled automatically for you.

## Subscribing

```javascript
const listener = (message, channel) => console.log(message, channel);
await client.subscribe('channel', listener);
await client.pSubscribe('channe*', listener);
// Use sSubscribe for sharded Pub/Sub:
await client.sSubscribe('channel', listener);
```

> ⚠️ Subscribing to the same channel more than once will create multiple listeners, each of which will be called when a message is received.

## Publishing

```javascript
await client.publish('channel', 'message');
// Use sPublish for sharded Pub/Sub:
await client.sPublish('channel', 'message');
```

## Unsubscribing

The code below unsubscribes all listeners from all channels.

```javascript
await client.unsubscribe();
await client.pUnsubscribe();
// Use sUnsubscribe for sharded Pub/Sub:
await client.sUnsubscribe();
```

To unsubscribe from specific channels:

```javascript
await client.unsubscribe('channel');
await client.unsubscribe(['1', '2']);
```

To unsubscribe a specific listener:

```javascript
await client.unsubscribe('channel', listener);
```

## Buffers

Publishing and subscribing using `Buffer`s is also supported:

```javascript
await subscriber.subscribe('channel', message => {
console.log(message); // <Buffer 6d 65 73 73 61 67 65>
}, true); // true = subscribe in `Buffer` mode.

await subscriber.publish(Buffer.from('channel'), Buffer.from('message'));
```

> NOTE: Buffers and strings are supported both for the channel name and the message. You can mix and match these as desired.
30 changes: 30 additions & 0 deletions docs/scan-iterators.md
@@ -0,0 +1,30 @@
# Scan Iterators

> :warning: The scan iterators API in v5 has breaking changes from the previous version. For more details, refer to the [v4-to-v5 guide](./v4-to-v5.md#scan-iterators).

[`SCAN`](https://redis.io/commands/scan) results can be looped over using [async iterators](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Symbol/asyncIterator):

```javascript
for await (const keys of client.scanIterator()) {
const values = await client.mGet(keys);
}
```
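
Under the hood, an iterator like this repeatedly issues `SCAN` with the cursor returned by the previous reply until the cursor comes back as `'0'`. A minimal model of that loop — `scan` here is a stand-in for the real command, not the client API:

```javascript
// Minimal model of a scan iterator (illustrative): keep issuing `scan` with the
// cursor from the previous reply until the server returns cursor '0', yielding
// one page of keys per iteration.
async function* scanIterator(scan) {
  let cursor = '0';
  do {
    const reply = await scan(cursor);
    cursor = reply.cursor;
    yield reply.keys;
  } while (cursor !== '0');
}
```

Note that each iteration yields a whole page of keys, which is what makes multi-key commands like `MGET` useful inside the loop.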

This works with `HSCAN`, `SSCAN`, and `ZSCAN` too:

```javascript
for await (const entries of client.hScanIterator('hash')) {}
for await (const members of client.sScanIterator('set')) {}
for await (const membersWithScores of client.zScanIterator('sorted-set')) {}
```

You can override the default options by providing a configuration object:

```javascript
client.scanIterator({
cursor: '0', // optional, defaults to '0'
TYPE: 'string', // `SCAN` only
MATCH: 'patter*',
COUNT: 100
});
```
103 changes: 103 additions & 0 deletions docs/sentinel.md
@@ -0,0 +1,103 @@
# Redis Sentinel

The [Redis Sentinel](https://redis.io/docs/management/sentinel/) object of node-redis is a high-level client for a high-availability Redis installation managed by Redis Sentinel. It enumerates the master and replica nodes of the installation, and reconfigures itself on demand in response to failover and topology changes.

## Basic Example

```javascript
import { createSentinel } from 'redis';

const sentinel = await createSentinel({
name: 'sentinel-db',
sentinelRootNodes: [{
host: 'example',
port: 1234
}]
})
.on('error', err => console.error('Redis Sentinel Error', err))
.connect();

await sentinel.set('key', 'value');
const value = await sentinel.get('key');
await sentinel.close();
```

In the above example, we configure the sentinel object to fetch the configuration for the database Redis Sentinel is monitoring as "sentinel-db", with one of the sentinels located at `example:1234`. We then use it like a regular Redis client.

## `createSentinel` configuration

| Property | Default | Description |
|----------------------------|-----------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| name | | The sentinel identifier for a particular database cluster |
| sentinelRootNodes | | An array of root nodes that are part of the sentinel cluster, which will be used to get the topology. Each element in the array is a client configuration object. There is no need to specify every node in the cluster: 3 should be enough to reliably connect and obtain the sentinel configuration from the server |
| maxCommandRediscovers | `16` | The maximum number of times a command will retry due to topology changes. |
| nodeClientOptions | | The configuration values for every node in the cluster. Use this for example when specifying an ACL user to connect with |
| sentinelClientOptions | | The configuration values for every sentinel in the cluster. Use this for example when specifying an ACL user to connect with |
| masterPoolSize | `1` | The number of clients connected to the master node |
| replicaPoolSize | `0` | The number of clients connected to each replica node. When greater than 0, the client will distribute the load by executing read-only commands (such as `GET`, `GEOSEARCH`, etc.) across all the cluster nodes. |
| scanInterval | `10000` | Interval in milliseconds to periodically scan for changes in the sentinel topology. The client will query the sentinel for changes at this interval. |
| passthroughClientErrorEvents | `false` | When `true`, error events from client instances inside the sentinel will be propagated to the sentinel instance. This allows handling all client errors through a single error handler on the sentinel instance. |
| reserveClient | `false` | When `true`, one client will be reserved for the sentinel object. When `false`, the sentinel object will wait for the first available client from the pool. |

## PubSub

The sentinel object supports Pub/Sub via the normal mechanisms, including migrating the listeners if the node they are connected to goes down.

```javascript
await sentinel.subscribe('channel', message => {
// ...
});
await sentinel.unsubscribe('channel');
```

See [the Pub/Sub guide](./pub-sub.md) for more details.

## Sentinel as a pool

The sentinel object provides the ability to manage a pool of clients for the master node:

```javascript
createSentinel({
// ...
masterPoolSize: 10
});
```

In addition, it provides the ability to have a pool of clients connected to the replica nodes, and to direct all read-only commands to them:

```javascript
createSentinel({
// ...
replicaPoolSize: 10
});
```

## Master client lease

Sometimes multiple commands need to run on an exclusive client (for example, when using `WATCH/MULTI/EXEC`).

There are 2 ways to get a client lease:

`.use()`
```javascript
const result = await sentinel.use(async client => {
await client.watch('key');
return client.multi()
.get('key')
.exec();
});
```

`.acquire()`
```javascript
const clientLease = await sentinel.acquire();

try {
await clientLease.watch('key');
const resp = await clientLease.multi()
.get('key')
.exec();
} finally {
clientLease.release();
}
```
6 changes: 6 additions & 0 deletions docs/todo.md
@@ -0,0 +1,6 @@
- "Isolation Pool" -> pool
- Cluster request response policies (either implement, or block "server" commands in cluster)

Docs:
- [Command Options](./command-options.md)
- [RESP](./RESP.md)
53 changes: 53 additions & 0 deletions docs/transactions.md
@@ -0,0 +1,53 @@
# [Transactions](https://redis.io/docs/interact/transactions/) ([`MULTI`](https://redis.io/commands/multi/)/[`EXEC`](https://redis.io/commands/exec/))

Start a [transaction](https://redis.io/docs/interact/transactions/) by calling `.multi()`, then chaining your commands. When you're done, call `.exec()` and you'll get an array back with your results:

```javascript
const [setReply, getReply] = await client.multi()
.set('key', 'value')
.get('another-key')
.exec();
```

## `exec<'typed'>()`/`execTyped()`

A transaction invoked with `.exec<'typed'>`/`execTyped()` will return types appropriate to the commands in the transaction:

```javascript
const multi = client.multi().ping();
await multi.exec(); // Array<ReplyUnion>
await multi.exec<'typed'>(); // [string]
await multi.execTyped(); // [string]
```

> :warning: This only works when all the commands are invoked in a single "call chain".

## [`WATCH`](https://redis.io/commands/watch/)

You can also [watch](https://redis.io/docs/interact/transactions/#optimistic-locking-using-check-and-set) keys by calling `.watch()`. Your transaction will abort if any of the watched keys change or if the client reconnected between the `watch` and `exec` calls.

The `WATCH` state is stored on the connection (by the server). In case you need to run multiple `WATCH` & `MULTI` in parallel you'll need to use a [pool](./pool.md).
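
The abort semantics can be modeled with a toy versioned store — the `TinyStore` class below is hypothetical, not the client API: `watch` snapshots a key's version, and `exec` applies the queued commands only if no watched key has changed since the snapshot.

```javascript
// Toy model of WATCH/MULTI/EXEC optimistic locking (illustrative only):
// `watch` records a key's version; `exec` aborts with null if any watched
// key was modified between the watch and the exec.
class TinyStore {
  constructor() {
    this.data = new Map();
    this.versions = new Map();
  }

  set(key, value) {
    this.data.set(key, value);
    this.versions.set(key, (this.versions.get(key) ?? 0) + 1);
  }

  get(key) {
    return this.data.get(key);
  }

  watch(key) {
    return { key, version: this.versions.get(key) ?? 0 };
  }

  exec(watched, commands) {
    for (const { key, version } of watched) {
      if ((this.versions.get(key) ?? 0) !== version) return null; // aborted
    }
    return commands.map(([op, key, value]) =>
      op === 'set' ? (this.set(key, value), 'OK') : this.get(key)
    );
  }
}
```

In the real client the version bookkeeping lives on the server per connection, which is exactly why parallel `WATCH`/`MULTI` flows each need their own connection from a pool.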

## `execAsPipeline`

`execAsPipeline` executes the commands without "wrapping" them in `MULTI` & `EXEC` (losing the transactional semantics).

```javascript
await client.multi()
.get('a')
.get('b')
.execAsPipeline();
```

The difference between the above pipeline and `Promise.all`:

```javascript
await Promise.all([
client.get('a'),
client.get('b')
]);
```

is that if the socket disconnects during the pipeline, any unwritten commands will be discarded. i.e. if the socket disconnects after `GET a` is written to the socket, but before `GET b` is:
- using `Promise.all` - the client will try to execute `GET b` when the socket reconnects
- using `execAsPipeline` - `GET b` promise will be rejected as well
4 changes: 2 additions & 2 deletions docs/v3-to-v4.md
@@ -4,7 +4,7 @@ Version 4 of Node Redis is a major refactor. While we have tried to maintain bac

## All of the Breaking Changes

See the [Change Log](../CHANGELOG.md).

### Promises

@@ -16,7 +16,7 @@ The configuration object passed to `createClient` has changed significantly with

### No Auto Connect

In V4, the client does not automatically connect to the server. Instead you need to run `.connect()` after creating the client or you will receive an error: `ClientClosedError: The client is closed`.

```typescript
import { createClient } from 'redis';
245 changes: 245 additions & 0 deletions docs/v4-to-v5.md
@@ -0,0 +1,245 @@
# v4 to v5 migration guide

## Client Configuration

### Keep Alive

To better align with Node.js built-in [`net`](https://nodejs.org/api/net.html) and [`tls`](https://nodejs.org/api/tls.html) modules, the `keepAlive` option has been split into 2 options: `keepAlive` (`boolean`) and `keepAliveInitialDelay` (`number`). The defaults remain `true` and `5000`.
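
For example, the split options might look like this — a sketch, assuming the options live under `socket` as in v4:

```javascript
import { createClient } from 'redis';

const client = createClient({
  socket: {
    keepAlive: true,             // enable TCP keep-alive (the default)
    keepAliveInitialDelay: 5000  // ms before the first keep-alive probe (the default)
  }
});
```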

### Legacy Mode

In the previous version, you could access "legacy" mode by creating a client and passing in `{ legacyMode: true }`. Now, you can create one off of an existing client by calling the `.legacy()` function. This allows easier access to both APIs and enables better TypeScript support.

```javascript
// use `client` for the current API
const client = createClient();
await client.set('key', 'value');

// use `legacyClient` for the "legacy" API
const legacyClient = client.legacy();
legacyClient.set('key', 'value', (err, reply) => {
// ...
});
```

## Command Options

In v4, command options are passed as a first optional argument:

```javascript
await client.get('key'); // `string | null`
await client.get(client.commandOptions({ returnBuffers: true }), 'key'); // `Buffer | null`
```

This has a couple of flaws:
1. The argument types are checked at runtime, which is a performance hit.
2. Code suggestions are less readable/usable, due to "function overloading".
3. Overall, "user code" is not as readable as it could be.

### The new API for v5

With the new API, instead of passing the options directly to the commands we use a "proxy client" to store them:

```javascript
await client.get('key'); // `string | null`

const proxyClient = client.withCommandOptions({
typeMapping: {
[TYPES.BLOB_STRING]: Buffer
}
});

await proxyClient.get('key'); // `Buffer | null`
```

For more information, see the [Command Options guide](./command-options.md).

## Quit VS Disconnect

The `QUIT` command has been deprecated in Redis 7.2 and should now also be considered deprecated in Node-Redis. Instead of sending a `QUIT` command to the server, the client can simply close the network connection.

`client.QUIT/quit()` is replaced by `client.close()`, and, to avoid confusion, `client.disconnect()` has been renamed to `client.destroy()`.

## Scan Iterators

Iterator commands like `SCAN`, `HSCAN`, `SSCAN`, and `ZSCAN` return collections of elements (depending on the data type). However, v4 iterators loop over these collections and yield individual items:

```javascript
for await (const key of client.scanIterator()) {
console.log(key, await client.get(key));
}
```

This mismatch can be awkward and makes "multi-key" commands like `MGET`, `UNLINK`, etc. pointless. So, in v5 the iterators now yield a collection instead of an element:

```javascript
for await (const keys of client.scanIterator()) {
// we can now meaningfully utilize "multi-key" commands
console.log(keys, await client.mGet(keys));
}
```

For more information, see the [Scan Iterators guide](./scan-iterators.md).

## Isolation Pool

In v4, `RedisClient` had the ability to create a pool of connections using an "Isolation Pool" on top of the "main" connection. However, there was no way to use the pool without a "main" connection:
```javascript
const client = await createClient()
.on('error', err => console.error(err))
.connect();

await client.ping(
client.commandOptions({ isolated: true })
);
```

In v5 we've extracted this pool logic into its own class—`RedisClientPool`:

```javascript
const pool = await createClientPool()
.on('error', err => console.error(err))
.connect();

await pool.ping();
```

See the [pool guide](./pool.md) for more information.

## Cluster `MULTI`

In v4, `cluster.multi()` did not support executing commands on replicas, even if they were readonly.

```javascript
// this might execute on a replica, depending on configuration
await cluster.sendCommand('key', true, ['GET', 'key']);

// this always executes on a master
await cluster.multi()
.addCommand('key', ['GET', 'key'])
.exec();
```

To support executing commands on replicas, `cluster.multi().addCommand` now requires `isReadonly` as the second argument, which matches the signature of `cluster.sendCommand`:

```javascript
await cluster.multi()
.addCommand('key', true, ['GET', 'key'])
.exec();
```

## `MULTI.execAsPipeline()`

```javascript
await client.multi()
.set('a', 'a')
.set('b', 'b')
.execAsPipeline();
```

In older versions, if the socket disconnects during the pipeline execution, i.e. after writing `SET a a` and before `SET b b`, the returned promise is rejected, but `SET b b` will still be executed on the server.

In v5, any unwritten commands (in the same pipeline) will be discarded.

- `RedisFlushModes` -> `REDIS_FLUSH_MODES` [^enum-to-constants]

## Commands

### Redis

- `ACL GETUSER`: `selectors`
- `COPY`: `destinationDb` -> `DB`, `replace` -> `REPLACE`, `boolean` -> `number` [^boolean-to-number]
- `CLIENT KILL`: `enum ClientKillFilters` -> `const CLIENT_KILL_FILTERS` [^enum-to-constants]
- `CLUSTER FAILOVER`: `enum FailoverModes` -> `const FAILOVER_MODES` [^enum-to-constants]
- `CLIENT TRACKINGINFO`: `flags` in RESP2 - `Set<string>` -> `Array<string>` (to match RESP3 default type mapping)
- `CLUSTER INFO`:
- `CLUSTER SETSLOT`: `ClusterSlotStates` -> `CLUSTER_SLOT_STATES` [^enum-to-constants]
- `CLUSTER RESET`: the second argument is `{ mode: string; }` instead of `string` [^future-proofing]
- `CLUSTER FAILOVER`: `enum FailoverModes` -> `const FAILOVER_MODES` [^enum-to-constants], the second argument is `{ mode: string; }` instead of `string` [^future-proofing]
- `CLUSTER LINKS`: `createTime` -> `create-time`, `sendBufferAllocated` -> `send-buffer-allocated`, `sendBufferUsed` -> `send-buffer-used` [^map-keys]
- `CLUSTER NODES`, `CLUSTER REPLICAS`, `CLUSTER INFO`: returning the raw `VerbatimStringReply`
- `EXPIRE`: `boolean` -> `number` [^boolean-to-number]
- `EXPIREAT`: `boolean` -> `number` [^boolean-to-number]
- `HSCAN`: `tuples` has been renamed to `entries`
- `HEXISTS`: `boolean` -> `number` [^boolean-to-number]
- `HRANDFIELD_COUNT_WITHVALUES`: `Record<BlobString, BlobString>` -> `Array<{ field: BlobString; value: BlobString; }>` (it can return duplicates).
- `HSETNX`: `boolean` -> `number` [^boolean-to-number]
- `INFO`:
- `LCS IDX`: `length` has been changed to `len`, `matches` has been changed from `Array<{ key1: RangeReply; key2: RangeReply; }>` to `Array<[key1: RangeReply, key2: RangeReply]>`


- `ZINTER`: instead of `client.ZINTER('key', { WEIGHTS: [1] })` use `client.ZINTER({ key: 'key', weight: 1 })`
- `ZINTER_WITHSCORES`: instead of `client.ZINTER_WITHSCORES('key', { WEIGHTS: [1] })` use `client.ZINTER_WITHSCORES({ key: 'key', weight: 1 })`
- `ZUNION`: instead of `client.ZUNION('key', { WEIGHTS: [1] })` use `client.ZUNION({ key: 'key', weight: 1 })`
- `ZUNION_WITHSCORES`: instead of `client.ZUNION_WITHSCORES('key', { WEIGHTS: [1] })` use `client.ZUNION_WITHSCORES({ key: 'key', weight: 1 })`
- `ZMPOP`: `{ elements: Array<{ member: string; score: number; }>; }` -> `{ members: Array<{ value: string; score: number; }>; }` to match other sorted set commands (e.g. `ZRANGE`, `ZSCAN`)

- `MOVE`: `boolean` -> `number` [^boolean-to-number]
- `PEXPIRE`: `boolean` -> `number` [^boolean-to-number]
- `PEXPIREAT`: `boolean` -> `number` [^boolean-to-number]
- `PFADD`: `boolean` -> `number` [^boolean-to-number]

- `RENAMENX`: `boolean` -> `number` [^boolean-to-number]
- `SETNX`: `boolean` -> `number` [^boolean-to-number]
- `SCAN`, `HSCAN`, `SSCAN`, and `ZSCAN`: `reply.cursor` will not be converted to number to avoid issues when the number is bigger than `Number.MAX_SAFE_INTEGER`. See [here](https://github.com/redis/node-redis/issues/2561).
- `SCRIPT EXISTS`: `Array<boolean>` -> `Array<number>` [^boolean-to-number]
- `SISMEMBER`: `boolean` -> `number` [^boolean-to-number]
- `SMISMEMBER`: `Array<boolean>` -> `Array<number>` [^boolean-to-number]
- `SMOVE`: `boolean` -> `number` [^boolean-to-number]

- `GEOSEARCH_WITH`/`GEORADIUS_WITH`: `GeoReplyWith` -> `GEO_REPLY_WITH` [^enum-to-constants]
- `GEORADIUSSTORE` -> `GEORADIUS_STORE`
- `GEORADIUSBYMEMBERSTORE` -> `GEORADIUSBYMEMBER_STORE`
- `XACK`: `boolean` -> `number` [^boolean-to-number]
- `XADD`: the `INCR` option has been removed, use `XADD_INCR` instead
- `LASTSAVE`: `Date` -> `number` (unix timestamp)
- `HELLO`: `protover` moved from the options object to its own argument, `auth` -> `AUTH`, `clientName` -> `SETNAME`
- `MODULE LIST`: `version` -> `ver` [^map-keys]
- `MEMORY STATS`: [^map-keys]
- `FUNCTION RESTORE`: the second argument is `{ mode: string; }` instead of `string` [^future-proofing]
- `FUNCTION STATS`: `runningScript` -> `running_script`, `durationMs` -> `duration_ms`, `librariesCount` -> `libraries_count`, `functionsCount` -> `functions_count` [^map-keys]

- `TIME`: `Date` -> `[unixTimestamp: string, microseconds: string]`

- `XGROUP_CREATECONSUMER`: [^boolean-to-number]
- `XGROUP_DESTROY`: [^boolean-to-number]
- `XINFO GROUPS`: `lastDeliveredId` -> `last-delivered-id` [^map-keys]
- `XINFO STREAM`: `radixTreeKeys` -> `radix-tree-keys`, `radixTreeNodes` -> `radix-tree-nodes`, `lastGeneratedId` -> `last-generated-id`, `maxDeletedEntryId` -> `max-deleted-entry-id`, `entriesAdded` -> `entries-added`, `recordedFirstEntryId` -> `recorded-first-entry-id`, `firstEntry` -> `first-entry`, `lastEntry` -> `last-entry`
- `XAUTOCLAIM`, `XCLAIM`, `XRANGE`, `XREVRANGE`: `Array<{ name: string; messages: Array<{ id: string; message: Record<string, string> }>; }>` -> `Record<string, Array<{ id: string; message: Record<string, string> }>>`

- `COMMAND LIST`: `enum FilterBy` -> `const COMMAND_LIST_FILTER_BY` [^enum-to-constants], the filter argument has been moved from a "top level argument" into ` { FILTERBY: { type: <MODULE|ACLCAT|PATTERN>; value: <value> } }`

### Bloom

- `TOPK.QUERY`: `Array<number>` -> `Array<boolean>`

### JSON

- `JSON.ARRINDEX`: `start` and `end` arguments moved to `{ range: { start: number; end: number; }; }` [^future-proofing]
- `JSON.ARRPOP`: `path` and `index` arguments moved to `{ path: string; index: number; }` [^future-proofing]
- `JSON.ARRLEN`, `JSON.CLEAR`, `JSON.DEBUG MEMORY`, `JSON.DEL`, `JSON.FORGET`, `JSON.OBJKEYS`, `JSON.OBJLEN`, `JSON.STRAPPEND`, `JSON.STRLEN`, `JSON.TYPE`: `path` argument moved to `{ path: string; }` [^future-proofing]

### Search

- `FT.SUGDEL`: [^boolean-to-number]
- `FT.CURSOR READ`: `cursor` type changed from `number` to `string` (in and out) to avoid issues when the number is bigger than `Number.MAX_SAFE_INTEGER`. See [here](https://github.com/redis/node-redis/issues/2561).
- `AggregateGroupByReducers` -> `FT_AGGREGATE_GROUP_BY_REDUCERS` [^enum-to-constants]
- `AggregateSteps` -> `FT_AGGREGATE_STEPS` [^enum-to-constants]
- `RedisSearchLanguages` -> `REDISEARCH_LANGUAGE` [^enum-to-constants]
- `SchemaFieldTypes` -> `SCHEMA_FIELD_TYPE` [^enum-to-constants]
- `SchemaTextFieldPhonetics` -> `SCHEMA_TEXT_FIELD_PHONETIC` [^enum-to-constants]
- `SearchOptions` -> `FtSearchOptions`
- `VectorAlgorithms` -> `SCHEMA_VECTOR_FIELD_ALGORITHM` [^enum-to-constants]

### Time Series

- `TS.ADD`: `boolean` -> `number` [^boolean-to-number]
- `TS.[M][REV]RANGE`: the `ALIGN` argument has been moved into `AGGREGATION`
- `TS.SYNUPDATE`: `Array<string | Array<string>>` -> `Record<string, Array<string>>`
- `TimeSeriesDuplicatePolicies` -> `TIME_SERIES_DUPLICATE_POLICIES` [^enum-to-constants]
- `TimeSeriesEncoding` -> `TIME_SERIES_ENCODING` [^enum-to-constants]
- `TimeSeriesAggregationType` -> `TIME_SERIES_AGGREGATION_TYPE` [^enum-to-constants]
- `TimeSeriesReducers` -> `TIME_SERIES_REDUCERS` [^enum-to-constants]
- `TimeSeriesBucketTimestamp` -> `TIME_SERIES_BUCKET_TIMESTAMP` [^enum-to-constants]

[^map-keys]: To avoid unnecessary transformations and confusion, map keys will not be transformed to "js friendly" names (i.e. `number-of-keys` will not be renamed to `numberOfKeys`). See [here](https://github.com/redis/node-redis/discussions/2506).
[^enum-to-constants]: TypeScript enums have been replaced with plain constant objects.
[^boolean-to-number]: `boolean` replies have been changed to `number` to match the raw RESP reply.
[^future-proofing]: the arguments have been moved into an options object so that new options can be added without breaking changes.

---

**`docs/v5.md`** (new file, 188 additions)

# RESP3 Support

Node Redis v5 adds support for [RESP3](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md), the new Redis serialization protocol. RESP3 offers richer data types and improved type handling compared to RESP2.

To use RESP3, specify it when creating your client:

```javascript
import { createClient } from 'redis';

const client = createClient({
  RESP: 3
});
```

## Type Mapping

With RESP3, you can leverage the protocol's richer type system. You can customize how different Redis types are represented in JavaScript using type mapping:

```javascript
import { createClient, RESP_TYPES } from 'redis';

const client = createClient({ RESP: 3 });
await client.connect();

// By default
await client.hGetAll('key'); // Record<string, string>

// Use Map instead of plain object
await client.withTypeMapping({
  [RESP_TYPES.MAP]: Map
}).hGetAll('key'); // Map<string, string>

// Use both Map and Buffer
await client.withTypeMapping({
  [RESP_TYPES.MAP]: Map,
  [RESP_TYPES.BLOB_STRING]: Buffer
}).hGetAll('key'); // Map<string, Buffer>
```

This replaces the previous approach of using `commandOptions({ returnBuffers: true })` in v4.
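The difference between the two mappings is plain JavaScript: the default decodes a RESP3 map reply into a plain object, while `[RESP_TYPES.MAP]: Map` decodes it into a `Map`. A standalone sketch of the two shapes (the values here are made up, not fetched from Redis):

```javascript
// Hypothetical decoded HGETALL reply under each mapping
const asObject = { field1: 'Hello', field2: 'World' };             // default
const asMap = new Map([['field1', 'Hello'], ['field2', 'World']]); // with [RESP_TYPES.MAP]: Map

// Plain objects use property access; Maps use get()
console.log(asObject.field1);     // Hello
console.log(asMap.get('field1')); // Hello

// A Map also reports its size directly and sidesteps prototype lookups
console.log(asMap.size); // 2
```

`Map` is the safer choice when field names come from untrusted input (for example, a hash field literally named `__proto__`).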

## PubSub in RESP3

RESP3 uses a different mechanism for handling Pub/Sub messages. Instead of modifying the `onReply` handler as in RESP2, RESP3 provides a dedicated `onPush` handler. When using RESP3, the client automatically uses this more efficient push notification system.

## Known Limitations

### Unstable Commands

Some Redis commands have unstable RESP3 transformations. These commands will throw an error when used with RESP3 unless you explicitly opt in to using them by setting `unstableResp3: true` in your client configuration:

```javascript
const client = createClient({
  RESP: 3,
  unstableResp3: true
});
```

The following commands have unstable RESP3 implementations:

1. **Stream Commands**:
- `XREAD` and `XREADGROUP` - The response format differs between RESP2 and RESP3

2. **Search Commands (RediSearch)**:
- `FT.AGGREGATE`
- `FT.AGGREGATE_WITHCURSOR`
- `FT.CURSOR_READ`
- `FT.INFO`
- `FT.PROFILE_AGGREGATE`
- `FT.PROFILE_SEARCH`
- `FT.SEARCH`
- `FT.SEARCH_NOCONTENT`
- `FT.SPELLCHECK`

3. **Time Series Commands**:
- `TS.INFO`
- `TS.INFO_DEBUG`

If you need to use these commands with RESP3, be aware that the response format might change in future versions.

# Sentinel Support

[Sentinel](./sentinel.md)

# `multi.exec<'typed'>` / `multi.execTyped`

We have introduced the ability to perform a "typed" `MULTI`/`EXEC` transaction. Rather than returning `Array<ReplyUnion>`, a transaction invoked with `.exec<'typed'>` will return types appropriate to the commands in the transaction where possible:

```javascript
const multi = client.multi().ping();
await multi.exec(); // Array<ReplyUnion>
await multi.exec<'typed'>(); // [string]
await multi.execTyped(); // [string]
```

# Client Side Caching

Node Redis v5 adds support for [Client Side Caching](https://redis.io/docs/manual/client-side-caching/), which enables clients to cache query results locally. The server will notify the client when cached results are no longer valid.

Client Side Caching is only supported with RESP3.

## Usage

There are two ways to implement client side caching:

### Anonymous Cache

```javascript
const client = createClient({
  RESP: 3,
  clientSideCache: {
    ttl: 0, // Time-to-live in milliseconds (0 = no expiration)
    maxEntries: 0, // Maximum entries to store (0 = unlimited)
    evictPolicy: "LRU" // Eviction policy: "LRU" or "FIFO"
  }
});
```

In this instance, the cache is managed internally by the client.

### Controllable Cache

```javascript
import { BasicClientSideCache } from 'redis';

const cache = new BasicClientSideCache({
  ttl: 0,
  maxEntries: 0,
  evictPolicy: "LRU"
});

const client = createClient({
  RESP: 3,
  clientSideCache: cache
});
```

With this approach, you have direct access to the cache object for more control:

```javascript
// Manually invalidate keys
cache.invalidate(key);

// Clear the entire cache
cache.clear();

// Get cache metrics
// `cache.stats()` returns a `CacheStats` object with comprehensive statistics.
const statistics = cache.stats();

// Key metrics:
const hits = statistics.hitCount; // Number of cache hits
const misses = statistics.missCount; // Number of cache misses
const hitRate = statistics.hitRate(); // Cache hit rate (0.0 to 1.0)

// Many other metrics are available on the `statistics` object, e.g.:
// statistics.missRate(), statistics.loadSuccessCount,
// statistics.averageLoadPenalty(), statistics.requestCount()
```
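The rate metrics are simple ratios of the counters above. A standalone illustration of the arithmetic, with made-up counter values (in real use they come from `cache.stats()`):

```javascript
// Made-up counters standing in for statistics.hitCount / statistics.missCount
const hitCount = 90;
const missCount = 10;

// hitRate = hits / total requests, in the range 0.0 to 1.0
const hitRate = hitCount / (hitCount + missCount);
const missRate = missCount / (hitCount + missCount);

console.log(hitRate);  // 0.9
console.log(missRate); // 0.1
```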

## Pooled Caching

Client side caching also works with client pools. For pooled clients, the cache is shared across all clients in the pool:

```javascript
const client = createClientPool({RESP: 3}, {
  clientSideCache: {
    ttl: 0,
    maxEntries: 0,
    evictPolicy: "LRU"
  },
  minimum: 5
});
```

For a controllable pooled cache:

```javascript
import { BasicPooledClientSideCache } from 'redis';

const cache = new BasicPooledClientSideCache({
  ttl: 0,
  maxEntries: 0,
  evictPolicy: "LRU"
});

const client = createClientPool({RESP: 3}, {
  clientSideCache: cache,
  minimum: 5
});
```

---

**`doctests/README.md`** (new file, 32 additions)

# Command examples for redis.io

## Setup

To set up the examples folder so that you can run an example / develop one of your own:

```bash
$ git clone https://github.com/redis/node-redis.git
$ cd node-redis
$ npm install -ws && npm run build
$ cd doctests
$ npm install
```

## How to add examples

Create a regular Node.js file in this folder with a meaningful name. It makes sense to prefix example files with the
command category (e.g. string, set, list, hash) to make navigating the folder easier.

### Special markup

See https://github.com/redis-stack/redis-stack-website#readme for more details.

## How to test the examples

Just include the necessary assertions in the example file and run
```bash
sh doctests/run_examples.sh
```
to test all examples in the current folder.

See `tests.js` for more details.

---

**`doctests/cmds-cnxmgmt.js`** (new file, 49 additions)

// EXAMPLE: cmds_cnxmgmt
// REMOVE_START
import assert from "node:assert";
// REMOVE_END

// HIDE_START
import { createClient } from 'redis';

const client = createClient();
await client.connect().catch(console.error);
// HIDE_END

// STEP_START auth1
// REMOVE_START
await client.sendCommand(['CONFIG', 'SET', 'requirepass', 'temp_pass']);
// REMOVE_END
const res1 = await client.auth({ password: 'temp_pass' });
console.log(res1); // OK

const res2 = await client.auth({ username: 'default', password: 'temp_pass' });
console.log(res2); // OK

// REMOVE_START
assert.equal(res1, "OK");
assert.equal(res2, "OK");
await client.sendCommand(['CONFIG', 'SET', 'requirepass', '']);
// REMOVE_END
// STEP_END

// STEP_START auth2
// REMOVE_START
await client.sendCommand([
  'ACL', 'SETUSER', 'test-user',
  'on', '>strong_password', '+acl'
]);
// REMOVE_END
const res3 = await client.auth({ username: 'test-user', password: 'strong_password' });
console.log(res3); // OK

// REMOVE_START
assert.equal(res3, "OK");
await client.auth({ username: 'default', password: '' });
await client.sendCommand(['ACL', 'DELUSER', 'test-user']);
// REMOVE_END
// STEP_END

// HIDE_START
await client.close();
// HIDE_END

---

**`doctests/cmds-generic.js`** (new file, 195 additions)

// EXAMPLE: cmds_generic
// REMOVE_START
import assert from "node:assert";
// REMOVE_END

// HIDE_START
import { createClient } from 'redis';

const client = createClient();
await client.connect().catch(console.error);
// HIDE_END

// STEP_START del
const delRes1 = await client.set('key1', 'Hello');
console.log(delRes1); // OK

const delRes2 = await client.set('key2', 'World');
console.log(delRes2); // OK

const delRes3 = await client.del(['key1', 'key2', 'key3']);
console.log(delRes3); // 2
// REMOVE_START
assert.equal(delRes3, 2);
// REMOVE_END
// STEP_END

// STEP_START expire
const expireRes1 = await client.set('mykey', 'Hello');
console.log(expireRes1); // OK

const expireRes2 = await client.expire('mykey', 10);
console.log(expireRes2); // 1

const expireRes3 = await client.ttl('mykey');
console.log(expireRes3); // 10
// REMOVE_START
assert.equal(expireRes3, 10);
// REMOVE_END

const expireRes4 = await client.set('mykey', 'Hello World');
console.log(expireRes4); // OK

const expireRes5 = await client.ttl('mykey');
console.log(expireRes5); // -1
// REMOVE_START
assert.equal(expireRes5, -1);
// REMOVE_END

const expireRes6 = await client.expire('mykey', 10, "XX");
console.log(expireRes6); // 0
// REMOVE_START
assert.equal(expireRes6, 0);
// REMOVE_END

const expireRes7 = await client.ttl('mykey');
console.log(expireRes7); // -1
// REMOVE_START
assert.equal(expireRes7, -1);
// REMOVE_END

const expireRes8 = await client.expire('mykey', 10, "NX");
console.log(expireRes8); // 1
// REMOVE_START
assert.equal(expireRes8, 1);
// REMOVE_END

const expireRes9 = await client.ttl('mykey');
console.log(expireRes9); // 10
// REMOVE_START
assert.equal(expireRes9, 10);
await client.del('mykey');
// REMOVE_END
// STEP_END

// STEP_START ttl
const ttlRes1 = await client.set('mykey', 'Hello');
console.log(ttlRes1); // OK

const ttlRes2 = await client.expire('mykey', 10);
console.log(ttlRes2); // 1

const ttlRes3 = await client.ttl('mykey');
console.log(ttlRes3); // 10
// REMOVE_START
assert.equal(ttlRes3, 10);
await client.del('mykey');
// REMOVE_END
// STEP_END

// STEP_START scan1
const scan1Res1 = await client.sAdd('myset', ['1', '2', '3', 'foo', 'foobar', 'feelsgood']);
console.log(scan1Res1); // 6

let scan1Res2 = [];
for await (const values of client.sScanIterator('myset', { MATCH: 'f*' })) {
  scan1Res2 = scan1Res2.concat(values);
}
console.log(scan1Res2); // ['foo', 'foobar', 'feelsgood']
// REMOVE_START
console.assert(scan1Res2.sort().toString() === ['foo', 'foobar', 'feelsgood'].sort().toString());
await client.del('myset');
// REMOVE_END
// STEP_END

// STEP_START scan2
// REMOVE_START
for (let i = 1; i <= 1000; i++) {
  await client.set(`key:${i}`, i);
}
// REMOVE_END
let cursor = '0';
let scanResult;

scanResult = await client.scan(cursor, { MATCH: '*11*' });
console.log(scanResult.cursor, scanResult.keys);

scanResult = await client.scan(scanResult.cursor, { MATCH: '*11*' });
console.log(scanResult.cursor, scanResult.keys);

scanResult = await client.scan(scanResult.cursor, { MATCH: '*11*' });
console.log(scanResult.cursor, scanResult.keys);

scanResult = await client.scan(scanResult.cursor, { MATCH: '*11*' });
console.log(scanResult.cursor, scanResult.keys);

scanResult = await client.scan(scanResult.cursor, { MATCH: '*11*', COUNT: 1000 });
console.log(scanResult.cursor, scanResult.keys);
// REMOVE_START
console.assert(scanResult.keys.length === 18);
cursor = '0';
const prefix = 'key:*';
do {
  scanResult = await client.scan(cursor, { MATCH: prefix, COUNT: 1000 });
  console.log(scanResult.cursor, scanResult.keys);
  cursor = scanResult.cursor;
  const keys = scanResult.keys;
  if (keys.length) {
    await client.del(keys);
  }
} while (cursor !== '0');
// REMOVE_END
// STEP_END

// STEP_START scan3
const scan3Res1 = await client.geoAdd('geokey', { longitude: 0, latitude: 0, member: 'value' });
console.log(scan3Res1); // 1

const scan3Res2 = await client.zAdd('zkey', [{ score: 1000, value: 'value' }]);
console.log(scan3Res2); // 1

const scan3Res3 = await client.type('geokey');
console.log(scan3Res3); // zset
// REMOVE_START
console.assert(scan3Res3 === 'zset');
// REMOVE_END

const scan3Res4 = await client.type('zkey');
console.log(scan3Res4); // zset
// REMOVE_START
console.assert(scan3Res4 === 'zset');
// REMOVE_END

const scan3Res5 = await client.scan('0', { TYPE: 'zset' });
console.log(scan3Res5.keys); // ['zkey', 'geokey']
// REMOVE_START
console.assert(scan3Res5.keys.sort().toString() === ['zkey', 'geokey'].sort().toString());
await client.del(['geokey', 'zkey']);
// REMOVE_END
// STEP_END

// STEP_START scan4
const scan4Res1 = await client.hSet('myhash', { a: 1, b: 2 });
console.log(scan4Res1); // 2

const scan4Res2 = await client.hScan('myhash', '0');
console.log(scan4Res2.entries); // [{field: 'a', value: '1'}, {field: 'b', value: '2'}]
// REMOVE_START
assert.deepEqual(scan4Res2.entries, [
  { field: 'a', value: '1' },
  { field: 'b', value: '2' }
]);
// REMOVE_END

const scan4Res3 = await client.hScan('myhash', '0', { COUNT: 10 });
const items = scan4Res3.entries.map((item) => item.field);
console.log(items); // ['a', 'b']
// REMOVE_START
assert.deepEqual(items, ['a', 'b']);
await client.del('myhash');
// REMOVE_END
// STEP_END

// HIDE_START
await client.close();
// HIDE_END

---

**`doctests/cmds-hash.js`** (new file, 109 additions)

// EXAMPLE: cmds_hash
// HIDE_START
import assert from 'node:assert';
import { createClient } from 'redis';

const client = createClient();
await client.connect().catch(console.error);
// HIDE_END

// STEP_START hset
const res1 = await client.hSet('myhash', 'field1', 'Hello')
console.log(res1) // 1

const res2 = await client.hGet('myhash', 'field1')
console.log(res2) // Hello

const res3 = await client.hSet(
  'myhash',
  {
    'field2': 'Hi',
    'field3': 'World'
  }
)
console.log(res3) // 2

const res4 = await client.hGet('myhash', 'field2')
console.log(res4) // Hi

const res5 = await client.hGet('myhash', 'field3')
console.log(res5) // World

const res6 = await client.hGetAll('myhash')
console.log(res6)

// REMOVE_START
assert.equal(res1, 1);
assert.equal(res2, 'Hello');
assert.equal(res3, 2);
assert.equal(res4, 'Hi');
assert.equal(res5, 'World');
assert.deepEqual(res6, {
  field1: 'Hello',
  field2: 'Hi',
  field3: 'World'
});
await client.del('myhash')
// REMOVE_END
// STEP_END

// STEP_START hget
const res7 = await client.hSet('myhash', 'field1', 'foo')
console.log(res7) // 1

const res8 = await client.hGet('myhash', 'field1')
console.log(res8) // foo

const res9 = await client.hGet('myhash', 'field2')
console.log(res9) // null

// REMOVE_START
assert.equal(res7, 1);
assert.equal(res8, 'foo');
assert.equal(res9, null);
await client.del('myhash')
// REMOVE_END
// STEP_END

// STEP_START hgetall
const res10 = await client.hSet(
  'myhash',
  {
    'field1': 'Hello',
    'field2': 'World'
  }
)

const res11 = await client.hGetAll('myhash')
console.log(res11) // [Object: null prototype] { field1: 'Hello', field2: 'World' }

// REMOVE_START
assert.deepEqual(res11, {
  field1: 'Hello',
  field2: 'World'
});
await client.del('myhash')
// REMOVE_END
// STEP_END

// STEP_START hvals
const res12 = await client.hSet(
  'myhash',
  {
    'field1': 'Hello',
    'field2': 'World'
  }
)

const res13 = await client.hVals('myhash')
console.log(res13) // [ 'Hello', 'World' ]

// REMOVE_START
assert.deepEqual(res13, [ 'Hello', 'World' ]);
await client.del('myhash')
// REMOVE_END
// STEP_END

// HIDE_START
await client.close();
// HIDE_END

---

**`doctests/cmds-list.js`** (new file, 129 additions)

// EXAMPLE: cmds_list
// HIDE_START
import assert from 'node:assert';
import { createClient } from 'redis';

const client = createClient();
await client.connect().catch(console.error);
// HIDE_END

// STEP_START lpush
const res1 = await client.lPush('mylist', 'world');
console.log(res1); // 1

const res2 = await client.lPush('mylist', 'hello');
console.log(res2); // 2

const res3 = await client.lRange('mylist', 0, -1);
console.log(res3); // [ 'hello', 'world' ]

// REMOVE_START
assert.deepEqual(res3, [ 'hello', 'world' ]);
await client.del('mylist');
// REMOVE_END
// STEP_END

// STEP_START lrange
const res4 = await client.rPush('mylist', 'one');
console.log(res4); // 1

const res5 = await client.rPush('mylist', 'two');
console.log(res5); // 2

const res6 = await client.rPush('mylist', 'three');
console.log(res6); // 3

const res7 = await client.lRange('mylist', 0, 0);
console.log(res7); // [ 'one' ]

const res8 = await client.lRange('mylist', -3, 2);
console.log(res8); // [ 'one', 'two', 'three' ]

const res9 = await client.lRange('mylist', -100, 100);
console.log(res9); // [ 'one', 'two', 'three' ]

const res10 = await client.lRange('mylist', 5, 10);
console.log(res10); // []

// REMOVE_START
assert.deepEqual(res7, [ 'one' ]);
assert.deepEqual(res8, [ 'one', 'two', 'three' ]);
assert.deepEqual(res9, [ 'one', 'two', 'three' ]);
assert.deepEqual(res10, []);
await client.del('mylist');
// REMOVE_END
// STEP_END

// STEP_START llen
const res11 = await client.lPush('mylist', 'World');
console.log(res11); // 1

const res12 = await client.lPush('mylist', 'Hello');
console.log(res12); // 2

const res13 = await client.lLen('mylist');
console.log(res13); // 2

// REMOVE_START
assert.equal(res13, 2);
await client.del('mylist');
// REMOVE_END
// STEP_END

// STEP_START rpush
const res14 = await client.rPush('mylist', 'hello');
console.log(res14); // 1

const res15 = await client.rPush('mylist', 'world');
console.log(res15); // 2

const res16 = await client.lRange('mylist', 0, -1);
console.log(res16); // [ 'hello', 'world' ]

// REMOVE_START
assert.deepEqual(res16, [ 'hello', 'world' ]);
await client.del('mylist');
// REMOVE_END
// STEP_END

// STEP_START lpop
const res17 = await client.rPush('mylist', ["one", "two", "three", "four", "five"]);
console.log(res17); // 5

const res18 = await client.lPop('mylist');
console.log(res18); // 'one'

const res19 = await client.lPopCount('mylist', 2);
console.log(res19); // [ 'two', 'three' ]

const res20 = await client.lRange('mylist', 0, -1);
console.log(res20); // [ 'four', 'five' ]

// REMOVE_START
assert.deepEqual(res20, [ 'four', 'five' ]);
await client.del('mylist');
// REMOVE_END
// STEP_END

// STEP_START rpop
const res21 = await client.rPush('mylist', ["one", "two", "three", "four", "five"]);
console.log(res21); // 5

const res22 = await client.rPop('mylist');
console.log(res22); // 'five'

const res23 = await client.rPopCount('mylist', 2);
console.log(res23); // [ 'four', 'three' ]

const res24 = await client.lRange('mylist', 0, -1);
console.log(res24); // [ 'one', 'two' ]

// REMOVE_START
assert.deepEqual(res24, [ 'one', 'two' ]);
await client.del('mylist');
// REMOVE_END
// STEP_END

// HIDE_START
await client.close();
// HIDE_END