
chore(repo): Fix nightly e2e tests #6262

Closed · wants to merge 4 commits
29 changes: 26 additions & 3 deletions .github/actions/verdaccio/action.yml
@@ -32,7 +32,15 @@ runs:
shell: bash
run: |
nohup ./node_modules/.bin/verdaccio --config ./verdaccio.publish.yaml & echo "VERDACCIO_PID=$!" >> $GITHUB_ENV
sleep 5
# Wait for Verdaccio to start and verify it's responding
for i in {1..10}; do
if curl -f ${{ inputs.registry }}/ > /dev/null 2>&1; then
echo "Verdaccio is up and running"
break
fi
echo "Waiting for Verdaccio to start (attempt $i)..."
sleep 2
done
pnpm config set $(echo ${{ inputs.registry }} | sed -E 's/https?://')/:_authToken secretToken
Comment on lines 34 to 44 (Contributor)

💡 Verification agent

❓ Verification inconclusive

Add an explicit failure after the readiness-polling loop

If Verdaccio never comes up within the 10 attempts, the script silently proceeds to the publish step and will fail later with a much less obvious error. Abort early to surface the root cause:

 for i in {1..10}; do
   if curl -f ${{ inputs.registry }}/ > /dev/null 2>&1; then
     echo "Verdaccio is up and running"
     break
   fi
   echo "Waiting for Verdaccio to start (attempt $i)..."
   sleep 2
 done
+# Bail out if Verdaccio never became ready
+if ! curl -f ${{ inputs.registry }}/ > /dev/null 2>&1; then
+  echo "Verdaccio did not start after 10 attempts" >&2
+  exit 1
+fi

🤖 Prompt for AI Agents
In .github/actions/verdaccio/action.yml around lines 34 to 44, after the loop
that waits for Verdaccio to start, add a conditional check to verify if
Verdaccio is actually up. If the service is not responding after all attempts,
exit the script with a non-zero status to fail the workflow early. This ensures
the action does not proceed when Verdaccio is not ready.


- name: Publish to Verdaccio
@@ -41,10 +49,25 @@ runs:

- name: Stop Verdaccio
shell: bash
run: kill -9 $VERDACCIO_PID
run: |
if [ -n "$VERDACCIO_PID" ]; then
kill -9 $VERDACCIO_PID || true
sleep 2
fi
Comment on lines +52 to +56 (Contributor)

🛠️ Refactor suggestion

Missing symmetrical shutdown for the second Verdaccio instance

You stop the first Verdaccio (publish) instance but never terminate the second one started with the install config.
This can leave an orphaned background process in the runner, potentially contaminating parallel jobs.

Add a second “Stop Verdaccio (install)” step or trap the PID so both instances are always cleaned up.

🤖 Prompt for AI Agents
In .github/actions/verdaccio/action.yml around lines 52 to 56, the script stops
the first Verdaccio instance using its PID but does not stop the second
Verdaccio instance started with the install config, risking orphaned processes.
Modify the script to capture and store the PID of the second Verdaccio instance
as well, then add a corresponding kill command to terminate it, ensuring both
instances are properly shut down and cleaned up.
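
A minimal sketch of the missing cleanup step, assuming the install-config instance exports its PID under a separate variable (the step name and VERDACCIO_INSTALL_PID are illustrative, not taken from this PR):

- name: Stop Verdaccio (install config)
  shell: bash
  run: |
    # Assumes the install-config instance was started with something like:
    #   nohup ./node_modules/.bin/verdaccio --config ./verdaccio.install.yaml & echo "VERDACCIO_INSTALL_PID=$!" >> $GITHUB_ENV
    if [ -n "$VERDACCIO_INSTALL_PID" ]; then
      kill -9 "$VERDACCIO_INSTALL_PID" || true
      sleep 2
    fi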


- name: Run Verdaccio (using install config)
shell: bash
run: |
nohup ./node_modules/.bin/verdaccio --config ./verdaccio.install.yaml & sleep 5
nohup ./node_modules/.bin/verdaccio --config ./verdaccio.install.yaml & echo "VERDACCIO_PID=$!" >> $GITHUB_ENV
# Wait for Verdaccio to start and verify it's responding
for i in {1..10}; do
if curl -f ${{ inputs.registry }}/ > /dev/null 2>&1; then
echo "Verdaccio is up and running"
break
fi
echo "Waiting for Verdaccio to start (attempt $i)..."
sleep 2
done
pnpm config set $(echo ${{ inputs.registry }} | sed -E 's/https?://')/:_authToken secretToken
# Verify proxy is working by trying to fetch a known package
pnpm view semver > /dev/null 2>&1 || echo "Warning: Could not fetch semver package, proxy might not be working"
50 changes: 33 additions & 17 deletions .github/workflows/ci.yml
@@ -133,7 +133,7 @@ jobs:
name: Static analysis
permissions:
contents: read
actions: write # needed for actions/upload-artifact
actions: write # needed for actions/upload-artifact
runs-on: 'blacksmith-8vcpu-ubuntu-2204'
defaults:
run:
@@ -193,7 +193,7 @@ jobs:
name: Unit Tests
permissions:
contents: read
actions: write # needed for actions/upload-artifact
actions: write # needed for actions/upload-artifact
runs-on: 'blacksmith-8vcpu-ubuntu-2204'
defaults:
run:
Expand All @@ -208,9 +208,9 @@ jobs:
matrix:
include:
- node-version: 18
test-filter: "--filter=@clerk/astro --filter=@clerk/backend --filter=@clerk/express --filter=@clerk/nextjs --filter=@clerk/clerk-react --filter=@clerk/shared --filter=@clerk/remix --filter=@clerk/tanstack-react-start --filter=@clerk/elements --filter=@clerk/vue --filter=@clerk/nuxt --filter=@clerk/clerk-expo"
test-filter: '--filter=@clerk/astro --filter=@clerk/backend --filter=@clerk/express --filter=@clerk/nextjs --filter=@clerk/clerk-react --filter=@clerk/shared --filter=@clerk/remix --filter=@clerk/tanstack-react-start --filter=@clerk/elements --filter=@clerk/vue --filter=@clerk/nuxt --filter=@clerk/clerk-expo'
- node-version: 22
test-filter: "**"
test-filter: '**'

steps:
- name: Checkout Repo
@@ -266,7 +266,7 @@ jobs:
name: Integration Tests
permissions:
contents: read
actions: write # needed for actions/upload-artifact
actions: write # needed for actions/upload-artifact
runs-on: 'blacksmith-8vcpu-ubuntu-2204'
defaults:
run:
Expand All @@ -276,7 +276,24 @@ jobs:
strategy:
fail-fast: false
matrix:
test-name: ['generic', 'express', 'quickstart', 'ap-flows', 'elements', 'localhost', 'sessions', 'astro', 'expo-web', 'tanstack-react-start', 'tanstack-react-router', 'vue', 'nuxt', 'react-router', 'billing']
test-name:
[
'generic',
'express',
'quickstart',
'ap-flows',
'elements',
'localhost',
'sessions',
'astro',
'expo-web',
'tanstack-react-start',
'tanstack-react-router',
'vue',
'nuxt',
'react-router',
'billing',
]
test-project: ['chrome']
include:
- test-name: 'nextjs'
@@ -350,16 +367,16 @@ jobs:
INTEGRATION_CERTS: '${{secrets.INTEGRATION_CERTS}}'
INTEGRATION_ROOT_CA: '${{secrets.INTEGRATION_ROOT_CA}}'
with:
script: |
const fs = require('fs');
const path = require('path');
const rootCa = process.env.INTEGRATION_ROOT_CA;
console.log('rootCa', rootCa);
fs.writeFileSync(path.join(process.env.GITHUB_WORKSPACE, 'integration/certs', 'rootCA.pem'), rootCa);
const certs = JSON.parse(process.env.INTEGRATION_CERTS);
for (const [name, cert] of Object.entries(certs)) {
fs.writeFileSync(path.join(process.env.GITHUB_WORKSPACE, 'integration/certs', name), cert);
}
script: |
const fs = require('fs');
const path = require('path');
const rootCa = process.env.INTEGRATION_ROOT_CA;
console.log('rootCa', rootCa);
fs.writeFileSync(path.join(process.env.GITHUB_WORKSPACE, 'integration/certs', 'rootCA.pem'), rootCa);
const certs = JSON.parse(process.env.INTEGRATION_CERTS);
for (const [name, cert] of Object.entries(certs)) {
fs.writeFileSync(path.join(process.env.GITHUB_WORKSPACE, 'integration/certs', name), cert);
}

- name: LS certs
if: ${{ steps.task-status.outputs.affected == '1' }}
@@ -381,7 +398,6 @@ jobs:
MAILSAC_API_KEY: ${{ secrets.MAILSAC_API_KEY }}
NODE_EXTRA_CA_CERTS: ${{ github.workspace }}/integration/certs/rootCA.pem


- name: Upload test-results
if: ${{ cancelled() || failure() }}
uses: actions/upload-artifact@v4
56 changes: 53 additions & 3 deletions .github/workflows/nightly-checks.yml
@@ -1,18 +1,24 @@
name: Nightly upstream tests
on:
pull_request:
branches:
- main
- release/v4
workflow_dispatch:
schedule:
- cron: '0 7 * * *'

jobs:
integration-tests:
name: Integration Tests
runs-on: ${{ vars.RUNNER_NORMAL || 'ubuntu-latest' }}
runs-on: ${{ vars.RUNNER_NORMAL || 'ubuntu-latest' }}
timeout-minutes: ${{ vars.TIMEOUT_MINUTES_EXTENDED && fromJSON(vars.TIMEOUT_MINUTES_EXTENDED) || 30 }}

strategy:
matrix:
test-name: ['nextjs']
# Don't cancel other matrix jobs if one fails
fail-fast: false

steps:
- name: Checkout Repo
@@ -40,12 +46,31 @@ jobs:
working-directory: ./integration
run: pnpm init && pnpm add @clerk/backend

- name: List installed packages in /integration
working-directory: ./integration
run: |
echo "=== Listing installed packages in /integration ==="
pnpm list --json
echo "=== Detailed package info for @clerk/backend ==="
pnpm list @clerk/backend

- name: Install @clerk/clerk-js in os temp
working-directory: ${{runner.temp}}
run: mkdir clerk-js && cd clerk-js && pnpm init && pnpm add @clerk/clerk-js

Comment on lines 58 to 60 (Contributor)

⚠️ Potential issue

pnpm init is interactive – CI will hang

pnpm init prompts for package-json answers unless --yes (or -y) is supplied.
On GitHub runners this blocks the job indefinitely.

-run: mkdir clerk-js && cd clerk-js && pnpm init && pnpm add @clerk/clerk-js
+run: mkdir clerk-js && cd clerk-js && pnpm init -y && pnpm add @clerk/clerk-js
🤖 Prompt for AI Agents
In .github/workflows/nightly-checks.yml at lines 58 to 60, the command `pnpm
init` is used without the `--yes` flag, causing it to run interactively and hang
the CI job. Modify the command to include `--yes` (or `-y`) so it runs
non-interactively, for example, change `pnpm init` to `pnpm init --yes` to
prevent the job from blocking.

- name: Run Integration Tests
run: pnpm turbo test:integration:${{ matrix.test-name }} $TURBO_ARGS --only
id: integration_tests
continue-on-error: true
run: |
# Capture the output and exit code
OUTPUT_FILE="${{runner.temp}}/test-output.log"
# Only run Typedoc tests for one matrix version
if [ "${{ matrix.test-name }}" == "nextjs" ]; then
E2E_DEBUG=1 E2E_APP_ID=quickstart.next.appRouter pnpm test:integration:base --grep @quickstart 2>&1 | tee "$OUTPUT_FILE"
else
E2E_DEBUG=1 pnpm turbo test:integration:${{ matrix.test-name }} $TURBO_ARGS --only 2>&1 | tee "$OUTPUT_FILE"
fi
echo "exit_code=${PIPESTATUS[0]}" >> $GITHUB_OUTPUT
env:
E2E_APP_CLERK_JS_DIR: ${{runner.temp}}
E2E_CLERK_VERSION: 'latest'
@@ -56,11 +81,36 @@ jobs:
INTEGRATION_INSTANCE_KEYS: ${{ secrets.INTEGRATION_INSTANCE_KEYS }}
MAILSAC_API_KEY: ${{ secrets.MAILSAC_API_KEY }}

# Upload test artifacts if tests failed
- name: Upload Test Artifacts
if: steps.integration_tests.outputs.exit_code != '0'
uses: actions/upload-artifact@v4
with:
name: test-artifacts-${{ matrix.test-name }}
path: |
${{runner.temp}}/test-output.log
integration/test-results/
integration/.next/
${{runner.temp}}/clerk-js/node_modules/
retention-days: 7

- name: Report Status
if: always()
uses: ravsamhq/notify-slack-action@v1
with:
status: ${{ job.status }}
status: ${{ steps.integration_tests.outputs.exit_code == '0' && 'success' || 'failure' }}
notify_when: 'failure'
notification_title: 'Integration Test Failure - ${{ matrix.test-name }}'
message_format: |
*Job:* ${{ github.workflow }} (${{ matrix.test-name }})
*Status:* ${{ steps.integration_tests.outputs.exit_code == '0' && 'Success' || 'Failed' }}
*Commit:* ${{ github.sha }}
*PR:* ${{ github.event.pull_request.html_url }}
*Artifacts:* ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
Comment on lines +101 to +109 (Contributor)

🛠️ Refactor suggestion

Slack notification ignores second test run – potential false green

status: and message fields only look at integration_tests.exit_code.
If the first run passes and the second fails, the workflow reports success.

Consider aggregating both steps, or emit a combined output and reference that:

-          status: ${{ steps.integration_tests.outputs.exit_code == '0' && 'success' || 'failure' }}
+          status: ${{ (steps.integration_tests.outputs.exit_code == '0' && steps.integration_tests2.outputs.exit_code == '0') && 'success' || 'failure' }}
🤖 Prompt for AI Agents
In .github/workflows/nightly-checks.yml around lines 117 to 125, the Slack
notification only checks the exit code of the integration_tests step, ignoring
the second test run, which can cause false success reports. Modify the workflow
to aggregate the exit codes of both test runs into a combined output or status
variable, then update the notification fields to reference this combined result
to accurately reflect the overall test outcome.
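
One possible shape for that aggregation, sketched on the assumption that the second run lives in a separate step with id integration_tests2 (an id not visible in this diff); the Slack step would then read steps.test_summary.outputs.status instead of the raw exit codes:

- name: Aggregate test results
  id: test_summary
  if: always()
  run: |
    # Combine both exit codes into a single success/failure output for later steps.
    first="${{ steps.integration_tests.outputs.exit_code }}"
    second="${{ steps.integration_tests2.outputs.exit_code }}"
    if [ "$first" = "0" ] && [ "$second" = "0" ]; then
      echo "status=success" >> "$GITHUB_OUTPUT"
    else
      echo "status=failure" >> "$GITHUB_OUTPUT"
    fi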

env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_SDK_E2E_ALERTS_WEBHOOK_URL }}

# Fail the workflow if tests failed
- name: Check Test Status
if: steps.integration_tests.outputs.exit_code != '0'
run: exit 1
3 changes: 2 additions & 1 deletion .github/workflows/pr-title-linter.yml
@@ -13,6 +13,7 @@ on:
jobs:
pr-title-lint:
runs-on: ubuntu-latest
# DISABLED: This workflow is temporarily disabled. Remove this condition to re-enable.
steps:
- name: Checkout Repo
uses: actions/checkout@v4
@@ -27,4 +28,4 @@ jobs:
run: |
npm init --scope=clerk --yes
npm i --save-dev @commitlint/config-conventional @commitlint/cli globby --audit=false --fund=false
echo '${{ github.event.pull_request.title }}' | npm exec @commitlint/cli -- --config commitlint.config.ts
echo '${{ github.event.pull_request.title }}' | npm exec @commitlint/cli -- --config commitlint.config.ts
1 change: 0 additions & 1 deletion .prettierignore
@@ -1,6 +1,5 @@
.cache
.changeset
.github
.idea
.next
.temp_integration
4 changes: 4 additions & 0 deletions integration/models/applicationConfig.ts
@@ -76,6 +76,10 @@ export const applicationConfig = () => {
}
return self;
},
/**
* Creates a new application directory and copies the template files (and any overrides) to it.
* The application directory is created in the `constants.TMP_DIR` directory.
*/
commit: async (opts?: { stableHash?: string }) => {
const { stableHash } = opts || {};
logger.info(`Creating project "${name}"`);
13 changes: 13 additions & 0 deletions integration/models/longRunningApplication.ts
@@ -60,14 +60,17 @@ export const longRunningApplication = (params: LongRunningApplicationParams) =>
// the first time this is called, the app starts and the state is persisted in the state file
init: async () => {
try {
console.log(`[${name}] Starting initialization...`);
const publishableKey = params.env.publicVariables.get('CLERK_PUBLISHABLE_KEY');
const secretKey = params.env.privateVariables.get('CLERK_SECRET_KEY');
const apiUrl = params.env.privateVariables.get('CLERK_API_URL');
const { instanceType, frontendApi: frontendApiUrl } = parsePublishableKey(publishableKey);
console.log(`[${name}] Instance type: ${instanceType}, Frontend API URL: ${frontendApiUrl}`);

if (instanceType !== 'development') {
console.log('Clerk: skipping setup of testing tokens for non-development instance');
} else {
console.log(`[${name}] Setting up testing tokens...`);
await clerkSetup({
publishableKey,
frontendApiUrl,
@@ -76,32 +79,42 @@ export const longRunningApplication = (params: LongRunningApplicationParams) =>
apiUrl,
dotenv: false,
});
console.log(`[${name}] Testing tokens setup completed`);
}
} catch (error) {
console.error('Error setting up testing tokens:', error);
throw error;
}
try {
console.log(`[${name}] Committing config...`);
app = await config.commit();
console.log(`[${name}] Config committed successfully`);
} catch (error) {
console.error('Error committing config:', error);
throw error;
}
try {
console.log(`[${name}] Setting up environment...`);
await app.withEnv(params.env);
console.log(`[${name}] Environment setup completed`);
} catch (error) {
console.error('Error setting up environment:', error);
throw error;
}
try {
console.log(`[${name}] Running app setup...`);
await app.setup();
console.log(`[${name}] App setup completed`);
} catch (error) {
console.error('Error during app setup:', error);
throw error;
}
try {
console.log(`[${name}] Starting app in dev mode...`);
const { port, serverUrl, pid } = await app.dev({ detached: true });
console.log(`[${name}] App started successfully - Port: ${port}, PID: ${pid}, URL: ${serverUrl}`);
stateFile.addLongRunningApp({ port, serverUrl, pid, id, appDir: app.appDir, env: params.env.toJson() });
console.log(`[${name}] State file updated with app information`);
} catch (error) {
console.error('Error during app dev:', error);
throw error;
8 changes: 7 additions & 1 deletion integration/models/stateFile.ts
@@ -2,7 +2,7 @@ import { constants } from '../constants';
import { fs } from '../scripts';
import type { EnvironmentConfig } from './environment';

type AppParams = {
export type AppParams = {
id: string;
port: number;
serverUrl: string;
@@ -83,6 +83,11 @@ const createStateFile = () => {
return read().clerkJsHttpServerPid;
};

const debug = () => {
const json = read();
console.log('state file', JSON.stringify(json, null, 2));
};
Comment on lines +86 to +89 (Contributor)

🛠️ Refactor suggestion

Add explicit return type and JSDoc documentation.

The debug method is missing an explicit return type and JSDoc documentation as required for public APIs.

+ /**
+  * Logs the current state file content in a pretty-printed format for debugging purposes.
+  */
- const debug = () => {
+ const debug = (): void => {
   const json = read();
   console.log('state file', JSON.stringify(json, null, 2));
 };
🤖 Prompt for AI Agents
In integration/models/stateFile.ts around lines 86 to 89, the debug function
lacks an explicit return type and JSDoc documentation. Add a JSDoc comment above
the debug function describing its purpose and specify that it returns void.
Also, explicitly declare the return type of the debug function as void in its
signature.


return {
remove,
setStandAloneApp,
Expand All @@ -91,6 +96,7 @@ const createStateFile = () => {
getClerkJsHttpServerPid,
addLongRunningApp,
getLongRunningApps,
debug,
};
};

6 changes: 3 additions & 3 deletions integration/presets/envs.ts
@@ -151,9 +151,9 @@ const withSessionTasks
.setEnvVariable('public', 'CLERK_PUBLISHABLE_KEY', instanceKeys.get('with-session-tasks').pk)
.setEnvVariable('private', 'CLERK_ENCRYPTION_KEY', constants.E2E_CLERK_ENCRYPTION_KEY || 'a-key');

const withBillingStaging = base
const withBillingJwtV2 = base
.clone()
.setId('withBillingStaging')
.setId('withBillingJwtV2')
.setEnvVariable('private', 'CLERK_API_URL', 'https://api.clerkstage.dev')
.setEnvVariable('private', 'CLERK_SECRET_KEY', instanceKeys.get('with-billing-staging').sk)
.setEnvVariable('public', 'CLERK_PUBLISHABLE_KEY', instanceKeys.get('with-billing-staging').pk);
@@ -197,7 +197,7 @@ export const envs = {
withSignInOrUpEmailLinksFlow,
withSignInOrUpwithRestrictedModeFlow,
withSessionTasks,
withBillingStaging,
withBillingJwtV2,
withBilling,
withWhatsappPhoneCode,
sessionsProd1,