Troubleshooting guide

Asynchronous functions

Any function can be invoked asynchronously by changing the route on the gateway from /function/<name> to /async-function/<name>. A 202 Accepted message will be issued in response to asynchronous calls.

If you would like to receive a value from an asynchronous call, pass an HTTP header with the URL to be used for the callback.

$ faas invoke figlet --header "X-Callback-Url=https://request.bin/mybin"

Alternatively you can specify another asynchronous or synchronous function to run instead.
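
For example, invoking asynchronously via curl with a callback pointing at another function - the post-process function name and the in-cluster gateway address are placeholders here:

$ curl -i http://127.0.0.1:8080/async-function/figlet \
    -d "OpenFaaS" \
    -H "X-Callback-Url: http://gateway:8080/function/post-process"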

Parallelism

By default there is one queue-worker replica deployed, configured to run a single task at a time with a maximum duration of 30 seconds. You can increase the parallelism by scaling the queue-worker up - e.g. 5 replicas for 5 parallel tasks.
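
For example, to scale to 5 replicas (assuming the openfaas namespace on Kubernetes and a stack deployed as func on Swarm):

# Kubernetes
$ kubectl scale -n openfaas deploy/queue-worker --replicas=5

# Docker Swarm
$ docker service scale func_queue-worker=5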

You can tune the number of tasks each queue-worker runs in parallel, as well as the maximum duration of any asynchronous task it processes, by editing the Kubernetes Helm chart, the plain YAML, or the Swarm docker-compose.yml file, as sketched below.
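
A sketch of the docker-compose.yml route on Swarm - the max_inflight name is an assumption, so check your queue-worker version; ack_wait is covered in the asynchronous timeouts section below:

    queue-worker:
        environment:
            ack_wait: "60s"        # maximum duration of any asynchronous task
            max_inflight: "1"      # tasks run in parallel per replica (name assumed - verify for your version)
        deploy:
            replicas: 5            # up to 5 tasks in flight across the cluster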

The OpenFaaS workshop has more instructions on running tasks asynchronously.

Verbose Output

The Queue Worker component enables asynchronous processing of function requests. The default verbosity level hides the message content, but this can be viewed by setting write_debug to true when deploying.
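
As a sketch for a Swarm docker-compose.yml deployment (adjust for the Helm chart or plain YAML):

    queue-worker:
        environment:
            write_debug: "true"    # log the full message content of async requests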

Timeouts

Default timeouts are configured at the HTTP level and must be set both on the gateway and the function.

Note: all distributed systems need a maximum timeout value to be configured for work. This means that work cannot be unbounded.

Timeouts - Your function

You can also enforce a hard timeout for your function with the hard_timeout environment variable.

For watchdog configuration see the README.

The best way to set the timeout is in the YAML file generated by the faas-cli.

Example Go app that sleeps for 10 seconds:

provider:
  name: faas
  gateway: http://127.0.0.1:8080

functions:
  sleepygo:
    lang: go
    handler: ./sleepygo
    image: alexellis2/sleeps-for-10-seconds
    environment:
      read_timeout: 20s
      write_timeout: 20s

handler.go

package function

import (
	"fmt"
	"time"
)

func Handle(req []byte) string {
    time.Sleep(time.Second * 10)
    return fmt.Sprintf("Hello, Go. You said: %s", string(req))
}

Timeouts - Gateway

For the gateway set the following environment variables:

read_timeout: "25s"         # Maximum time to read HTTP request
write_timeout: "25s"        # Maximum time to write HTTP response
upstream_timeout: "20s"     # Maximum duration of upstream function call

Note: the value for upstream_timeout should be slightly lower than read_timeout and write_timeout.
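
As a sketch, these would sit under the gateway service's environment block in the Swarm docker-compose.yml (or the equivalent section of the Helm chart / plain YAML):

    gateway:
        environment:
            read_timeout: "25s"
            write_timeout: "25s"
            upstream_timeout: "20s"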

Timeouts - Function provider

When using a gateway version older than 0.7.8, a timeout matching the gateway's should be set for the faas-swarm or faas-netes controller:

read_timeout: 25s
write_timeout: 25s

Timeouts - Asynchronous invocations

For asynchronous invocations of functions a separate timeout can be configured at the queue-worker level via the ack_wait environment variable.

If ack_wait is exceeded, the task will not be acknowledged and the queue system will retry the invocation.

Function execution logs

By default functions do not log the result of an invocation, only how long the process took to run and the length of the result in bytes.

$ echo test this | faas invoke json-hook -g 127.0.0.1:31112
Received JSON webook. Elements: 10

$ kubectl logs deploy/json-hook -n openfaas-fn
2018/01/28 20:47:21 Writing lock-file to: /tmp/.lock
2018/01/28 20:47:27 Forking fprocess.
2018/01/28 20:47:27 Wrote 35 Bytes - Duration: 0.001844 seconds

If you want to see the result of a function in the function's logs then deploy it with the write_debug environment variable set to true.

For example:

provider:
  name: faas
  gateway: http://127.0.0.1:8080

functions:
  json-hook:
    lang: go
    handler: ./json-hook
    image: json-hook
    environment:
      write_debug: true

Now you'll see logs like this:

$ echo test this | faas invoke json-hook -g 127.0.0.1:31112
Received JSON webook. Elements: 10

$ kubectl logs deploy/json-hook -n openfaas-fn
2018/01/28 20:50:27 Writing lock-file to: /tmp/.lock
2018/01/28 20:50:35 Forking fprocess.
2018/01/28 20:50:35 Query
2018/01/28 20:50:35 Path  /function/json-hook
Received JSON webook. Elements: 10
2018/01/28 20:50:35 Duration: 0.001857 seconds

You can then find the logs of the function using Docker Swarm or Kubernetes as listed in the section below.

Healthcheck

Most problems reported via GitHub or Slack stem from a configuration problem or issue with a function. Here is a checklist of things you can try before digging deeper:

Checklist:

  • All core services are deployed, i.e. the gateway
  • Check functions are deployed and started
  • Check the request isn't timing out at the gateway or the function level (quick checks below)
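
A minimal run-through of the checklist on Kubernetes (Swarm equivalents are listed further down this page):

$ kubectl get deploy -n openfaas           # core services deployed and ready?
$ kubectl get deploy -n openfaas-fn        # functions deployed and ready?
$ faas-cli list -g http://127.0.0.1:8080   # gateway reachable from the CLI?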

CLI unresponsive - 127.0.0.1 vs. localhost

On certain Linux distributions the name localhost maps to an IPv6 alias, which can cause the CLI to hang. In these circumstances you have the following options:

  1. Use the -g or --gateway argument with 127.0.0.1:8080 or similar

  2. Set the OPENFAAS_URL environment variable to 127.0.0.1:8080 or similar (see the example after this list)

  3. Edit the /etc/hosts file on your machine and remove the IPv6 alias for localhost (this forces the use of IPv4)
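
For example, to apply option 2 for the rest of your shell session:

$ export OPENFAAS_URL=127.0.0.1:8080
$ faas-cli list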

Uninstall OpenFaaS

If you'd like to uninstall or remove OpenFaaS from a host follow the steps below.

CLI

If you'd like to remove the CLI and you installed it with brew, then use brew to remove it.

If you installed via the curl/sh utility script:

  • Run rm -rf /usr/local/bin/faas-cli
  • Delete saved gateway login details: rm -rf ~/.openfaas

Swarm

Remove any functions you deployed:

$ docker service ls --filter="label=function" -q | xargs docker service rm

Remove the whole stack

$ docker stack rm func

Kubernetes

If deployed via Helm:

helm delete --purge openfaas

If installed via YAML files:

kubectl delete namespace openfaas openfaas-fn

Troubleshooting Swarm or Kubernetes

Troubleshoot Docker Swarm

List all functions

$ docker service ls

You are looking for 1/1 for the replica count of each service listed.

Find a function's logs

$ docker service logs --tail 100 FUNCTION

Find out if a function failed to start

$ docker service ps --no-trunc=true FUNCTION

I forgot my gateway password

If you've logged into the OpenFaaS CLI then you can retrieve the credentials from config.yaml in ~/.openfaas/. Take the value from the token field and decode it with echo -n HASHED_VALUE | base64 -d (use -D on macOS) to view the contents in plain text. If you don't have access to bash or the base64 utility, run docker run -ti alpine:3.7 to get a shell in Docker.
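
For example, with HASHED_VALUE standing in for the value of the token field:

$ cat ~/.openfaas/config.yaml
$ echo -n HASHED_VALUE | base64 -d    # use base64 -D on macOS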

If you never logged in via the CLI then you can retrieve the contents from the cluster secret store:

Swarm

Use the jaas task-runner for Swarm (easiest option):

$ docker run -ti -v /var/run/docker.sock:/var/run/docker.sock \
  alexellis2/jaas:1.0.0 \
  run --secret basic-auth-password \
  --image alpine:3.7 \
  --command "cat /run/secrets/basic-auth-password"

Printing service logs
2018-08-28T07:50:46.431268693Z  21f596c9cd75a0fe5e335fb743995d18399e83418a37a79e719576a724efbbb6

Or use a one-shot Docker service:

$ docker service rm print-password \
 ; docker service create --detach --secret basic-auth-password \
   --restart-condition=none --name=print-password \
   alpine:3.7 cat /run/secrets/basic-auth-password

$ docker service logs print-password
print-password.1.59bwe0bb4d99@nuc    | 21f596c9cd75a0fe5e335fb743995d18399e83418a37a79e719576a724efbbb6

Troubleshoot Kubernetes

If you have deployed OpenFaaS to the recommended namespaces then functions are in the openfaas-fn namespace and the core services are in the openfaas namespace. The -n flag to kubectl sets the namespace to look at.

List OpenFaaS services

$ kubectl get deploy -n openfaas

List all functions

$ kubectl get deploy -n openfaas-fn

Find a function's logs

$ kubectl logs -n openfaas-fn deploy/FUNCTION_NAME

Find out if a function failed to start

$ kubectl describe -n openfaas-fn deploy/FUNCTION_NAME
$ kubectl get events --sort-by=.metadata.creationTimestamp -n openfaas-fn

Check logs of the core services

Check for any relevant events:

$ kubectl get events --sort-by=.metadata.creationTimestamp -n openfaas

These instructions may differ depending on whether you are using faas-netes (the default) or the OpenFaaS Operator.

Get logs using faas-netes:

$ kubectl logs -n openfaas deploy/gateway -c faas-netes
$ kubectl logs -n openfaas deploy/gateway -c gateway

Check the queue-worker:

$ kubectl logs -n openfaas deploy/queue-worker

Get logs using the OpenFaaS Operator:

$ kubectl logs -n openfaas deploy/gateway -c operator
$ kubectl logs -n openfaas deploy/gateway -c gateway

I forgot my gateway password

Use the following to print the secret on the terminal:

kubectl get secret -n openfaas gateway-basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo

If you installed OpenFaaS into a custom namespace then change the value -n openfaas to -n custom-ns.

Watchdog

Debug your function without deploying it

Here's an example of how you can deploy a function without using an orchestrator and the API gateway. It is especially useful for testing:

$ docker run --name debug-alpine \
  -p 8081:8080 -ti functions/alpine:latest sh
# fprocess=date fwatchdog &

Now you can access the function with any of the supported HTTP methods, such as GET or POST:

$ curl -4 127.0.0.1:8081
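
A POST works in the same way; with fprocess=date the request body is piped to date and ignored, so the response is still the current date:

$ curl -4 127.0.0.1:8081 --data "anything"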

Edit your function without rebuilding it

You can bind-mount code straight into your function and work with it locally, until you are ready to re-build. This is a common flow with containers, but should be used sparingly.

Within the CLI directory for instance:

Build the samples:

$ git clone https://github.com/openfaas/faas-cli && \
  cd faas-cli
$ faas-cli build -f ./samples.yml

Now work with the Python url-ping sample, with the code mounted live:

$ docker run -v `pwd`/sample/url-ping/:/root/function/ \
  --name debug-alpine -p 8081:8080 -ti alexellis/faas-url-ping sh
$ touch ./function/__init__.py
# fwatchdog

Now you can start editing the code in the sample/url-ping folder and it will reload live for every request.

$ curl 127.0.0.1:8081 -d "https://www.google.com"
Handle this -> https://www.google.com
https://www.google.com => 200

Now you can edit handler.py and you'll see the change immediately:

$ echo "def handle(req):" > sample/url-ping/handler.py
$ echo '    print("Nothing to see here")' >> sample/url-ping/handler.py
$ curl 127.0.0.1:8081 -d "https://www.google.com"
Nothing to see here