And integrate with Kubernetes' health-check mechanism:
implement /_/health as an HTTP endpoint
implement /_/ready as an HTTP endpoint
The paths for the HTTP health and readiness endpoints can be overridden with annotations (see below)
If running in read-only mode, you can only write files to the /tmp/ mount. These files may be available on subsequent requests, but this is not guaranteed: if you have two replicas of a function, each may have different contents in its /tmp/ mount. When running without read-only mode, you can write files to the user's home directory, subject to the same rules.
To build a function, use the OpenFaaS CLI to scaffold a new function from one of the official templates, or from one of your own. All OpenFaaS functions make use of the classic watchdog or the next-gen of-watchdog.
```sh
faas-cli template pull
faas-cli new --list
```
Or build your own Git repository of templates and pass its URL as an argument to faas-cli template pull:
```sh
faas-cli template pull https://github.com/my-org/templates
faas-cli new --list
```
Custom binaries can also be used as a function. Just use the dockerfile language template and replace the fprocess variable with the command you want to run per request. If you would like to pipe arguments to a CLI utility you can prefix the command with xargs.
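For instance, you could scaffold a dockerfile function and then edit its fprocess. The function name below and the figlet binary are purely illustrative:

```sh
faas-cli new wrap-cli --lang dockerfile
```

Then, in ./wrap-cli/Dockerfile, set the per-request command, e.g. ENV fprocess="xargs figlet", assuming figlet is installed in the image you build.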
If you can change the code of your application, add a health and readiness endpoint and change its HTTP port to 8080; you can then deploy it directly to OpenFaaS.
Let's assume you cannot change any code, and have a Node.js application that listens to traffic on port 3000. We can use the OpenFaaS of-watchdog in HTTP mode to proxy traffic to the process and to provide health checks.
You can view its Dockerfile and code at: alexellis/expressjs-k8s and the image is published to the Docker Hub at: alexellis2/service:0.3.6
Start by creating a new folder:
```sh
mkdir -p node-service/
```
Write a custom Dockerfile ./node-service/Dockerfile:
```dockerfile
# Import the OpenFaaS of-watchdog
FROM ghcr.io/openfaas/of-watchdog:0.9.16 as watchdog

# Add a FROM line from your existing image
FROM alexellis2/service:0.4.1

# Let's say that the image listens on port 3000 and
# that we can't change that easily
ENV http_port 3000

# Install the watchdog from the base image
COPY --from=watchdog /fwatchdog /usr/bin/

# Now set the watchdog as the start-up process
# Along with the HTTP mode, and an upstream URL to
# where your HTTP server will be running from the original
# image.
ENV mode="http"
ENV upstream_url="http://127.0.0.1:3000"

# Set fprocess to the value you have in your base image
ENV fprocess="node index.js"

CMD ["fwatchdog"]
```
Now create a stack.yml at the root directory ./stack.yml:
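A minimal sketch is shown below; the gateway URL and the image name (alexellis2/node-service-watchdog:0.1.0) are placeholders to replace with your own values:

```yaml
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080

functions:
  node-service:
    lang: dockerfile
    handler: ./node-service
    image: alexellis2/node-service-watchdog:0.1.0
```

Then run faas-cli up -f stack.yml to build, push and deploy the function.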
Custom TerminationGracePeriod for long-running functions
OpenFaaS Pro feature
This feature is part of the OpenFaaS Pro distribution.
You can configure your functions to drain any requests in flight when scaling down. This prevents errors and makes sure all work is processed, before Kubernetes finally removes any Pods.
To set a custom TerminationGracePeriod for a function, configure a write_timeout environment variable.
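For example, to allow up to one minute for in-flight requests to complete (the function name and image below are illustrative):

```yaml
functions:
  long-task:
    image: ttl.sh/example/long-task:latest
    environment:
      write_timeout: 1m
```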
When scaling down after scaling up, or when scaling to zero, Kubernetes will wait for up to 1m before removing the function's Pods. If there is no work in flight, the Pod can exit sooner, because the OpenFaaS watchdog performs a graceful shutdown.
A stateless microservice can be built using the dockerfile language type and the OpenFaaS CLI - or by building a custom Docker image which serves traffic on port 8080 and deploying that via the RESTful API, CLI or UI.
An example of a stateless microservice might be an Express.js application written in Node.js, a Sinatra app written in Ruby, or an ASP.NET Core 2.0 website.
Use of the OpenFaaS next-gen of-watchdog is optional, but recommended for stateless microservices to provide a consistent experience for timeouts, logging and configuration.
On Kubernetes, it is possible to run any container image as an OpenFaaS function, as long as your application exposes port 8080 and has an HTTP health check endpoint.
This feature is part of the OpenFaaS Pro distribution.
Liveness and readiness probes can be set globally for the installation: OpenFaaS chart reference.
Annotations can be used to configure probes on a per function basis. Any overrides set in annotations will take precedence over the global configuration.
You can specify the HTTP path of your health check and control the behavior of the probe with the following annotations:
com.openfaas.health.http.path
com.openfaas.health.http.initialDelaySeconds
com.openfaas.health.http.periodSeconds
com.openfaas.health.http.timeoutSeconds
com.openfaas.health.http.failureThreshold
Readiness probes use the same HTTP path as the health check by default. The path and other probe fields can be configured with these annotations:
com.openfaas.ready.http.path
com.openfaas.ready.http.initialDelaySeconds
com.openfaas.ready.http.periodSeconds
com.openfaas.ready.http.timeoutSeconds
com.openfaas.ready.http.successThreshold
com.openfaas.ready.http.failureThreshold
For example, you may have a function that takes 30s to initialise, but then only needs to be checked every 5s after that.
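That could be expressed with annotations in stack.yml, along the lines of the sketch below (the function name and image are illustrative; annotation values are strings):

```yaml
functions:
  slow-start:
    image: ttl.sh/example/slow-start:latest
    annotations:
      com.openfaas.ready.http.path: "/_/ready"
      com.openfaas.ready.http.initialDelaySeconds: "30"
      com.openfaas.ready.http.periodSeconds: "5"
```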
This feature is part of the OpenFaaS Pro distribution.
OpenFaaS exposes some information to functions through environment variables.
The function name is made available in every function as an environment variable, OPENFAAS_NAME. You can test this by deploying the env function from the store and invoking it.
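For instance, assuming faas-cli is already pointed at your OpenFaaS gateway:

```sh
faas-cli store deploy env
echo | faas-cli invoke env | grep OPENFAAS_NAME
```

You should see the OPENFAAS_NAME variable in the output.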
OpenFaaS supports workloads over HTTP, and most standard content types are supported.
Since OpenFaaS has no hard limit on function execution duration, it allows for maintaining long-lived connections for streaming over HTTP.
Important: Always ensure OpenFaaS system and function timeouts are configured appropriately for your streaming workloads. See Extended timeouts for details.
Supported streaming options:
Server-Sent Events (SSE)
Server-Sent Events enable a function to push one-way event streams to a client.
Clients should include an Accept: text/event-stream header in their request when starting the SSE request.
The function's response Content-Type header must be set to text/event-stream. Each event data chunk should be prefixed with data: and terminated by two newline characters (\n\n).
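For example, a client could consume such a stream with curl, where -N disables output buffering (the gateway URL and the function name sse-demo are placeholders):

```sh
curl -N -H "Accept: text/event-stream" \
  http://127.0.0.1:8080/function/sse-demo
```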
NDJSON (or JSON Lines) is a format for streaming multiple independent JSON objects, each on a new line.
Clients should include an Accept: application/x-ndjson header in their request.
The function's response Content-Type header should be set to application/x-ndjson. Each line in the response should be a complete JSON object followed by a newline character (\n).
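Similarly, an NDJSON stream could be consumed like this (again, the function name ndjson-demo is a placeholder):

```sh
curl -N -H "Accept: application/x-ndjson" \
  http://127.0.0.1:8080/function/ndjson-demo
```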