Core Delivery Platform Node.js Backend Template
Please install Node.js >= v22 and npm >= v11. You will find it easier to use the Node Version Manager, nvm.

To use the correct version of Node.js for this application, via nvm:

```bash
cd forms-audit-api
nvm use
```
- Install Docker
- Bring up runtime dependencies with `docker compose up`
- Create a `.env` file at the root level with the following mandatory environment variables populated:
```
MONGO_URI='mongodb://localhost:27017/?replicaSet=rs0&directConnection=true'
FORMS_AUDIT_QUEUE='forms_audit_events'
LOG_LEVEL=debug
SQS_ENDPOINT=http://localhost:4566
AWS_REGION=eu-west-2
EVENTS_SQS_QUEUE_URL=http://sqs.eu-west-2.127.0.0.1:4566/000000000000/forms_audit_events
AWS_ACCESS_KEY_ID=dummy
AWS_SECRET_ACCESS_KEY=dummy
RECEIVE_MESSAGE_TIMEOUT_MS=30000
SQS_MAX_NUMBER_OF_MESSAGES=10
SQS_VISIBILITY_TIMEOUT=30
ENTITLEMENT_URL=http://localhost:3004
```
For proxy options, see https://www.npmjs.com/package/proxy-from-env, which is used by https://github.com/TooTallNate/proxy-agents/tree/main/packages/proxy-agent. It currently supports Hapi Wreck only, e.g. in the JWKS lookup.
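For reference, proxy-from-env resolves the proxy to use from the standard `HTTP_PROXY` / `HTTPS_PROXY` / `NO_PROXY` environment variables. A small, self-contained illustration (the URL below is made up; this snippet is not part of this repo's code):

```javascript
import { getProxyForUrl } from 'proxy-from-env'

// Returns the proxy URL taken from HTTPS_PROXY / HTTP_PROXY (honouring NO_PROXY),
// or an empty string when no proxy should be used for this URL.
const proxyUrl = getProxyForUrl('https://oidc.example.test/.well-known/jwks.json')

console.log(proxyUrl || 'no proxy configured')
```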
Install application dependencies:

```bash
npm install
```
To run the application in development mode run:

```bash
npm run dev
```
To test the application run:

```bash
npm run test
```
To mimic the application running in production mode locally run:

```bash
npm start
```
`ReceiveMessageWaitTime` - this is probably the most important queue setting and controls what Amazon calls long polling vs short polling. When `ReceiveMessageWaitTime` is greater than 0, long polling is in effect. The maximum `ReceiveMessageWaitTime` is 20s.
This is the code affected by this setting:

```javascript
export function receiveEventMessages() {
  const command = new ReceiveMessageCommand(input)
  return sqsClient.send(command)
}
```
With short polling, the `sqsClient.send` call fetches any messages from SQS and returns immediately. It may sample only a subset of the queue's partitions, so you might not see all currently available messages on that call.

With long polling, if there aren't any messages available, the HTTP connection is kept open for up to 20s until some arrive. The consumer of `receiveEventMessages` is left waiting while that happens. It will fetch all currently available messages, up to the message limit, across the queue's partitions.
By default, CDP sets `ReceiveMessageWaitTime` to 20s. The auditing queue also uses this default. See here for more information.
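For local development against LocalStack, the equivalent queue attribute (`ReceiveMessageWaitTimeSeconds` in the SQS API) can be set when the queue is created. A minimal sketch using `@aws-sdk/client-sqs`; the queue name and endpoint mirror the `.env` values above, and the actual queue creation for this repo may well happen elsewhere (e.g. in the Docker Compose / LocalStack setup):

```javascript
import { SQSClient, CreateQueueCommand } from '@aws-sdk/client-sqs'

const sqsClient = new SQSClient({
  region: process.env.AWS_REGION, // eu-west-2
  endpoint: process.env.SQS_ENDPOINT // http://localhost:4566 (LocalStack)
})

// ReceiveMessageWaitTimeSeconds > 0 switches the queue to long polling for any
// ReceiveMessage call that does not override WaitTimeSeconds itself.
const { QueueUrl } = await sqsClient.send(
  new CreateQueueCommand({
    QueueName: process.env.FORMS_AUDIT_QUEUE, // forms_audit_events
    Attributes: { ReceiveMessageWaitTimeSeconds: '20' }
  })
)

console.log('Created queue', QueueUrl)
```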
- `RECEIVE_MESSAGE_TIMEOUT_MS` - the amount of time to wait between calls to receive messages (see the sketch after this list for how these settings are used)
- `SQS_MAX_NUMBER_OF_MESSAGES` - the number of messages to receive at once (max 10)
- `SQS_VISIBILITY_TIMEOUT` - when a message is received from an Amazon SQS queue, it remains in the queue but becomes temporarily invisible to other consumers. This invisibility window, the visibility timeout, ensures that other consumers cannot process the same message while you are working on it.
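Putting those settings together, the `input` passed to `ReceiveMessageCommand` maps onto the environment variables roughly as in the sketch below. This is an illustration of the mapping only; the actual shape of `input` and the polling loop in this repo's code may differ:

```javascript
import {
  SQSClient,
  ReceiveMessageCommand,
  DeleteMessageCommand
} from '@aws-sdk/client-sqs'

const sqsClient = new SQSClient({
  region: process.env.AWS_REGION,
  endpoint: process.env.SQS_ENDPOINT
})

const input = {
  QueueUrl: process.env.EVENTS_SQS_QUEUE_URL,
  // SQS_MAX_NUMBER_OF_MESSAGES - messages returned per receive call (max 10)
  MaxNumberOfMessages: Number(process.env.SQS_MAX_NUMBER_OF_MESSAGES),
  // SQS_VISIBILITY_TIMEOUT - seconds a received message stays hidden from other consumers
  VisibilityTimeout: Number(process.env.SQS_VISIBILITY_TIMEOUT)
}

// RECEIVE_MESSAGE_TIMEOUT_MS is not an SQS parameter: it is the pause between
// successive receive calls made by the consumer.
const { Messages = [] } = await sqsClient.send(new ReceiveMessageCommand(input))

for (const message of Messages) {
  // process the message, then delete it so it is not redelivered
  await sqsClient.send(
    new DeleteMessageCommand({
      QueueUrl: input.QueueUrl,
      ReceiptHandle: message.ReceiptHandle
    })
  )
}
```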
All available npm scripts can be seen in `package.json`. To view them in your command line run:

```bash
npm run
```
To update dependencies use npm-check-updates. The following command is a good start; check out all the options in the npm-check-updates documentation:

```bash
ncu --interactive --format group
```
If you are having issues with the formatting of line breaks on Windows, update your global git config by running:

```bash
git config --global core.autocrlf false
```
| Endpoint | Description |
| --- | --- |
| `GET: /health` | Health |
| `GET: /example` | Example API (remove as needed) |
| `GET: /example/<id>` | Example API (remove as needed) |
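Once the service is running you can hit the health endpoint to confirm it is up. A quick check using undici's `fetch` (this assumes the service is listening on port 3003, as in the Docker examples below):

```javascript
import { fetch } from 'undici'

const response = await fetch('http://localhost:3003/health')

// Expect a 200 status when the service is healthy
console.log(response.status, await response.text())
```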
If you require a write lock for Mongo you can acquire it via `server.locker` or `request.locker`:
```javascript
async function doStuff(server) {
  const lock = await server.locker.lock('unique-resource-name')
  if (!lock) {
    // Lock unavailable
    return
  }
  try {
    // do stuff
  } finally {
    await lock.free()
  }
}
```
Keep it small and atomic.
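The same locker API is available on the request object inside route handlers. A minimal sketch of taking the lock via `request.locker` (the route path and resource name are illustrative, not taken from this repo):

```javascript
export const exampleLockedRoute = {
  method: 'POST',
  path: '/example',
  handler: async (request, h) => {
    // request.locker exposes the same lock()/free() API as server.locker
    const lock = await request.locker.lock('unique-resource-name')

    if (!lock) {
      // Lock unavailable - tell the caller to retry later
      return h.response({ message: 'resource is locked' }).code(409)
    }

    try {
      // do stuff that must not run concurrently
      return h.response().code(204)
    } finally {
      await lock.free()
    }
  }
}
```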
You may use `await using` for lock resource management. Note that test coverage reports do not like that syntax.
```javascript
async function doStuff(server) {
  await using lock = await server.locker.lock('unique-resource-name')
  if (!lock) {
    // Lock unavailable
    return
  }
  // do stuff
  // lock automatically released
}
```
Helper methods are also available in `/src/helpers/mongo-lock.js`.
We are using a forward proxy, which is set up by default. To make use of it, `import { fetch } from 'undici'`; because of the `setGlobalDispatcher(new ProxyAgent(proxyUrl))` call, `fetch` calls will then use the `ProxyAgent` dispatcher.

If you are not using Wreck, Axios, Undici or a similar HTTP client that uses the global dispatcher, then you may have to provide the proxy dispatcher yourself. To add the dispatcher to your own client:
```javascript
import { fetch, ProxyAgent } from 'undici'

// Illustrative wrapper: the function and parameter names are not from this repo
export function proxyFetch(url, proxyUrl) {
  return fetch(url, {
    dispatcher: new ProxyAgent({
      uri: proxyUrl,
      keepAliveTimeout: 10,
      keepAliveMaxTimeout: 10
    })
  })
}
```
Build:

```bash
docker build --target development --no-cache --tag forms-audit-api:development .
```

Run:

```bash
docker run -e PORT=3003 -p 3003:3003 forms-audit-api:development
```
Build:

```bash
docker build --no-cache --tag forms-audit-api .
```

Run:

```bash
docker run -e PORT=3003 -p 3003:3003 forms-audit-api
```
A local environment with:
- Localstack for AWS services (S3, SQS)
- Redis
- MongoDB
- This service
- A commented-out frontend example
```bash
docker compose up --build -d
```
We have added an example Dependabot configuration file to the repository. You can enable it by renaming `.github/example.dependabot.yml` to `.github/dependabot.yml`.

Instructions for setting up SonarCloud can be found in `sonar-project.properties`.
THIS INFORMATION IS LICENSED UNDER THE CONDITIONS OF THE OPEN GOVERNMENT LICENCE found at:
http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3
The following attribution statement MUST be cited in your products and applications when using this information.
Contains public sector information licensed under the Open Government license v3
The Open Government Licence (OGL) was developed by the Controller of Her Majesty's Stationery Office (HMSO) to enable information providers in the public sector to license the use and re-use of their information under a common open licence.
It is designed to encourage use and re-use of information freely and flexibly, with only a few conditions.