Every function in OpenFaaS can be invoked either synchronously or asynchronously. Asynchronous invocations are retried automatically, and can return their response to a given endpoint via a webhook.
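A sketch of both features, assuming a gateway port-forwarded to 127.0.0.1:8080 and two example functions, figlet and printer (the latter receiving the callback):

```bash
# Queue an async invocation; the gateway replies immediately with
# 202 Accepted and an X-Call-Id header, which also accompanies the
# callback sent to the X-Callback-Url once the invocation completes.
curl -i http://127.0.0.1:8080/async-function/figlet \
  -d "OpenFaaS" \
  -H "X-Callback-Url: http://gateway.openfaas:8080/function/printer"
```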
Popular use-cases include:
Batch processing and machine learning
Resilient data pipelines
Receiving webhooks
Long running jobs
On the blog we show reference examples built upon these architectural patterns.
NATS - an open source messaging system hosted by the CNCF
NATS JetStream - a messaging system built on top of NATS for durable queues and message streams
A JetStream Server is the original NATS Core project, running in "jetstream mode".
A Stream is a message store; in OpenFaaS it is used to queue async invocation messages.
A Consumer is a stateful view of a stream. When clients consume messages from a stream, the consumer keeps track of which messages were delivered and acknowledged.
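If you have the nats CLI available within the cluster, these concepts can be inspected directly. The stream name below assumes the default OpenFaaS queue, faas-request, with its OF_ prefix:

```bash
# List all streams, then the consumers attached to the default queue's stream
nats stream ls
nats consumer ls OF_faas-request
```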
For staging and development environments, OpenFaaS can be deployed with an embedded version of the NATS server without persistence. This is the default when you install OpenFaaS using the OpenFaaS Helm chart.
If the NATS Pod restarts, you will lose all messages that it contains. This could happen if you update the chart and the version of the NATS server has changed, or if a node is removed from the cluster.
External NATS server
For production environments you should install NATS separately using its Helm chart.
NATS can be configured with a quorum of at least 3 replicas so it can recover data if one of the replicas should crash. You can also enable a persistent volume in the NATS chart for additional durability.
The queue-worker uses a shared NATS Stream and NATS Consumer by default, which works well with many of the existing autoscaling strategies. Requests are processed in a FIFO order, and it is possible for certain functions to dominate or starve the queue.
A fairer approach is to scale functions based upon their respective queue depth, with a consumer created for each function as and when it is needed.
The mode parameter can be set to static (default) or function.
If set to static, the queue-worker will scale its NATS Consumers based upon the number of replicas of the queue-worker. This is the default mode, and is ideal for development or constrained environments.
If set to function, the queue-worker will scale its NATS Consumers based upon the number of functions that are active in the queue. This is ideal for production environments where you want to scale your functions based upon the queue depth. It also gives messages queued at different times a fairer chance of being processed earlier.
The inactiveThreshold parameter can be used to set the threshold for when a function is considered inactive. If a function is inactive for longer than the threshold, the queue-worker will delete the NATS Consumer for that function.
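As a sketch, both settings can be applied through the OpenFaaS Helm chart; the jetstreamQueueWorker keys below follow recent chart versions, so verify them against your own:

```bash
# Switch to per-function consumers, and remove a function's consumer
# after 10 minutes without activity (illustrative values).
helm upgrade openfaas openfaas/openfaas \
  --namespace openfaas \
  --reuse-values \
  --set jetstreamQueueWorker.mode=function \
  --set jetstreamQueueWorker.consumer.inactiveThreshold=10m
```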
Get insight into the behaviour of your queues with built-in metrics.
Prometheus metrics are available for monitoring the number of messages submitted to the queue over a period of time, how many messages are waiting to be completed, and the total number of messages processed.
An overview of all the available metrics can be found in the metrics reference.
OpenFaaS ships with a “mixed queue”, where all invocations run in the same queue. If you have special requirements, you can set up your own separate queue and queue-worker using the queue-worker helm chart.
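Once a dedicated queue-worker is installed, a function can be routed to its queue via the com.openfaas.queue annotation. A sketch with a hypothetical queue named slow-queue:

```bash
# Deploy a sample function from the store, sending its async
# invocations to the dedicated "slow-queue" instead of the mixed queue.
faas-cli store deploy figlet \
  --annotation com.openfaas.queue=slow-queue
```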
Users can specify a list of HTTP codes that should be retried a number of times using an exponential back-off algorithm to mitigate the impact associated with retrying messages.
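As a sketch, retries are typically tuned per function through annotations; the names below follow the OpenFaaS retries documentation, but verify them against your version:

```bash
# Retry on HTTP 429 and 502 up to 10 times, with exponential back-off
# between a minimum wait of 5s and a maximum of 1m.
faas-cli store deploy figlet \
  --annotation com.openfaas.retry.attempts=10 \
  --annotation com.openfaas.retry.codes=429,502 \
  --annotation com.openfaas.retry.min_wait=5s \
  --annotation com.openfaas.retry.max_wait=1m
```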
From time to time, you may wish to reset or purge an async message queue, perhaps because you have generated a large number of unnecessary messages that you want to remove.
Each queue-worker deployment creates a dedicated NATS Stream to store async invocation messages. On startup, the queue-worker creates this stream if it does not exist, so to reset a queue, simply delete the stream and restart the queue-worker; the stream and its consumer will be recreated.
Keep in mind that deleting the stream removes any queued async invocations.
```bash
# This example deletes the stream for the OpenFaaS default queue, named faas-request.
# The stream name is always the queue name prefixed with OF_.
# Replace the queue name with the name of the queue stream you want to delete.
export QUEUE_NAME=faas-request

nats stream delete OF_$QUEUE_NAME
```
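Then restart the queue-worker so that it recreates the stream and consumer. A sketch, assuming the standard deployment name and namespace used by the OpenFaaS chart:

```bash
kubectl rollout restart deployment/queue-worker -n openfaas
```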
NATS JetStream is a highly available, scale-out messaging system. It is used in OpenFaaS Pro for queueing asynchronous invocations.
The OpenFaaS Helm chart installs NATS by default, but the included configuration is designed only for development or non-critical internal use.
By default, NATS runs with a single replica and no persistent storage. This means:
If the NATS Pod goes down, all asynchronous OpenFaaS requests will fail until it comes back online.
Pending messages are not stored persistently, so they are lost if the Pod or node crashes.
All queued messages are lost during redeployments or when the NATS service restarts.
The default setup is often sufficient for development or staging, but it is not suitable for production. For reliable operation, production systems need additional configuration to provide high availability and data durability.
The next section describes our recommended configuration for a production-grade installation of OpenFaaS with an external NATS deployment.
Deploy NATS in your cluster using the official NATS Helm chart.
A minimum of three NATS servers is required for production. This ensures that message processing can continue even if one Pod or node fails.
The Helm chart deploys NATS as a StatefulSet. To avoid a single point of failure, it is recommended to schedule each Pod on a different Kubernetes node.
At the networking level, NATS clustering uses a gossip protocol to announce basic cluster membership, and Raft to store and track messages. If you experience issues, check that your network policies and security groups allow this traffic.
Add the following to the values.yaml configuration for your NATS deployment:
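The keys below follow recent releases of the official NATS Helm chart and are a sketch rather than a definitive configuration; verify them against your chart version:

```yaml
config:
  cluster:
    enabled: true
    # Three servers form a quorum that can survive the loss of one replica
    replicas: 3
  jetstream:
    enabled: true
    fileStore:
      pvc:
        # Start with a generous estimate, then monitor actual usage
        size: 10Gi
```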
Persistent Volumes (PVs) are how Kubernetes adds stateful storage to containers. Applications define a Persistent Volume Claim (PVC), which is fulfilled by a storage engine, resulting in storage being provisioned and then attached to the container as a PV.
The NATS Helm chart enables file storage for JetStream by default. The PV can be configured through a template in the NATS Helm chart.
The size value will depend on your own usage. We suggest you start with a generous estimate, then monitor actual usage.
If not specified, the default storage class will be used. You can use a different storage class by setting the config.jetstream.fileStore.pvc.storageClassName parameter.
When running on-premises, if you do not have a storage driver, you can try Longhorn from the CNCF.
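For example, to have JetStream's file store provisioned by Longhorn, assuming it is installed with its default storage class name:

```yaml
config:
  jetstream:
    fileStore:
      pvc:
        # "longhorn" is the default class name created by a Longhorn install
        storageClassName: longhorn
```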