The OpenFaaS design allows it to provide a standard API across several different container orchestration tools: Kubernetes, containerd, and others. These faas-providers generally implement the same core features, allowing your functions to remain portable and be deployed on any certified OpenFaaS installation, regardless of the orchestration layer. However, certain workloads or deployments require more advanced features or fine-tuning of configuration. To allow maximum flexibility without overloading the OpenFaaS function configuration, we introduced the concept of Profiles: a reserved function annotation that the faas-provider can detect and use to apply advanced configuration.
In some cases there may be a 1:1 mapping between Profiles and Functions; this is to be expected for TopologySpreadConstraints and Affinity rules, and poses no performance or scalability issue.
In other cases, one Profile may serve more than one function, such as when using a toleration or a runtime class.
Multiple Profiles can be composed together for functions, if required.
Note: The general design is inspired by StorageClasses and IngressClasses in Kubernetes. If you are familiar with Kubernetes, these comparisons may be helpful, but they are not required to understand Profiles in OpenFaaS.
Profiles must be pre-created, similar to Secrets, usually by the cluster admin. The OpenFaaS API does not provide a way to create Profiles, because they are highly specific to the orchestration tool.
When installing OpenFaaS on Kubernetes, Profiles use a CRD. This must be installed during, or prior to, starting the OpenFaaS controller. When using the official Helm chart this happens automatically. Alternatively, you can apply this YAML to install the CRD.
Profiles in Kubernetes work by injecting the supplied configuration directly into the correct locations of the Function's Deployment. This allows us to directly expose the underlying API without any additional modifications. Currently, it exposes the following Pod and Container options from the Kubernetes API.
This example requires OpenFaaS for Enterprises (faas-netes 0.5.65 or higher) and is aimed at securing enterprise and multi-tenant workloads.
Pod Security Standards are a set of best practices for securing your Pods; enforcement via the built-in Pod Security admission controller became stable in Kubernetes v1.25. The restricted profile is the most secure option.
The below example deploys a function which will pass the restricted Pod Security Standard.
It defines:
A new namespace for functions called restricted-fn, which has been labeled with pod-security.kubernetes.io/enforce: restricted
A new Profile called restricted which sets the Pod Security Context to use RuntimeDefault and runAsNonRoot: true - any name can be used, or you could update an existing Profile that you're already using
A function called env which uses the restricted Profile
```yaml
---
# Namespace "restricted-fn"
apiVersion: v1
kind: Namespace
metadata:
  name: restricted-fn
  labels:
    kubernetes.io/metadata.name: restricted-fn
    pod-security.kubernetes.io/enforce: restricted
  annotations:
    openfaas: "1"
---
# Profile "restricted"
apiVersion: openfaas.com/v1
kind: Profile
metadata:
  name: restricted
  namespace: openfaas
spec:
  podSecurityContext:
    seccompProfile:
      type: RuntimeDefault
    runAsNonRoot: true
---
# Function "env"
apiVersion: openfaas.com/v1
kind: Function
metadata:
  name: env
  namespace: restricted-fn
spec:
  name: env
  image: ghcr.io/openfaas/alpine:latest
  environment:
    fprocess: "env"
  annotations:
    com.openfaas.profile: restricted
```
The securityContext for the container is not exposed as a separate configuration item, since all required values (apart from capabilities) are set at the Pod level instead.
By default, OpenFaaS for Enterprises will drop all Linux capabilities from the container. This is a requirement of the restricted Pod Security Standard.
The following will be added to the container's securityContext:
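As a sketch, based on the requirements of the restricted Pod Security Standard (the exact fields injected are determined by your faas-netes version):

```yaml
securityContext:
  capabilities:
    drop:
    - ALL
```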
A popular alternative container runtime class is gVisor, which provides additional sandboxing between containers. If your cluster uses gVisor, you will need to set the runtimeClassName on the Pods that are created. This is not exposed in the OpenFaaS API, but it can be set via a Profile.
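A minimal sketch of such a Profile, assuming your cluster already has a RuntimeClass named `gvisor` installed:

```yaml
kind: Profile
apiVersion: openfaas.com/v1
metadata:
  name: gvisor
  namespace: openfaas
spec:
  runtimeClassName: gvisor
```

Functions then opt in via the `com.openfaas.profile: gvisor` annotation.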
Set an elevated Pod priority with priorityClassName¶
In some cases, you may want to set a higher priority for certain functions to ensure they are scheduled first, or evicted last by the scheduler. This can be done by setting the priorityClassName in a Profile.
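For example, a sketch of a Profile referencing a PriorityClass named `high-priority`, which you would need to have created in the cluster beforehand:

```yaml
kind: Profile
apiVersion: openfaas.com/v1
metadata:
  name: high-priority
  namespace: openfaas
spec:
  priorityClassName: high-priority
```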
Specify a nodeSelector to schedule functions to specific nodes¶
This example works for OpenFaaS Standard and OpenFaaS for Enterprises only, but you should consider using TopologySpreadConstraints or Affinity rules instead, which are more versatile.
"nodeSelector is the simplest recommended form of node selection constraint. You can add the nodeSelector field to your Pod specification and specify the node labels you want the target node to have. Kubernetes only schedules the Pod onto nodes that have each of the labels you specify."
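As a sketch, assuming the target nodes carry a hypothetical `customer=customer1` label (e.g. applied with `kubectl label node worker-1 customer=customer1`):

```yaml
kind: Profile
apiVersion: openfaas.com/v1
metadata:
  name: customer1
  namespace: openfaas
spec:
  nodeSelector:
    customer: customer1
```

Functions deployed with the `com.openfaas.profile: customer1` annotation will then only be scheduled onto matching nodes.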
$ kubectl get deploy -o wide -n openfaas-fn
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
customer1-env 1/1 1 1 18s customer1-env ghcr.io/openfaas/alpine:latest faas_function=customer1-env
customer2-env 1/1 1 1 18s customer2-env ghcr.io/openfaas/alpine:latest faas_function=customer2-env
This will also work if you have several nodes dedicated to a particular customer: apply the label to each node and add the constraint at deployment time.
You may also want to consider using a taint and toleration to ensure OpenFaaS workload components do not get scheduled to these nodes.
Spreading your functions out across different zones for High Availability¶
The topologySpreadConstraints feature of Kubernetes provides a more flexible alternative to Pod Affinity / Anti-Affinity rules for scheduling functions.
"You can use topology spread constraints to control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, and other user-defined topology domains. This can help to achieve high availability as well as efficient resource utilization."
Imagine a cluster with two nodes, each in a different availability zone.
The constraint of whenUnsatisfiable: DoNotSchedule means Pods are not scheduled if they cannot be balanced evenly. This may become an issue if your nodes are of different sizes, so you may also want to consider changing this value to ScheduleAnyway.
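A sketch of such a Profile, assuming your faas-netes version supports topologySpreadConstraints in the Profile spec; note that the labelSelector targets a single function (here `env`, via the `faas_function` label), which is why a 1:1 mapping between Profile and Function is expected in this case:

```yaml
kind: Profile
apiVersion: openfaas.com/v1
metadata:
  name: spread-env
  namespace: openfaas
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        faas_function: env
```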
Use Tolerations and Affinity to Separate Workloads¶
This example is for OpenFaaS Pro because it uses Affinity.
While the OpenFaaS API exposes the Kubernetes NodeSelector via constraints, affinity/anti-affinity and taint/tolerations can be used to further expand the types of constraints you can express. OpenFaaS Profiles allow you to set these options. They allow you to more accurately isolate workloads, keep certain workloads together on the same nodes, or to keep certain workloads separate.
For example, a mixture of taints and affinity can put less critical functions on preemptable vms that are cheaper while keeping critical functions on standard nodes with higher availability guarantees.
In this example, we create a Profile using taints and affinity to place functions on nodes with NVMe storage. We will also ensure that only functions that require NVMe are scheduled on these nodes. This ensures that the functions that need faster storage are not blocked by other standard functions taking up resources on these special nodes.
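A sketch of such a Profile, assuming the NVMe nodes carry a hypothetical `nvme=installed` label and a matching `nvme` taint with the NoSchedule effect:

```yaml
kind: Profile
apiVersion: openfaas.com/v1
metadata:
  name: nvme
  namespace: openfaas
spec:
  tolerations:
  - key: nvme
    operator: Equal
    value: installed
    effect: NoSchedule
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: nvme
            operator: In
            values:
            - installed
```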
Install the latest faas-netes release and the CRD. This is most easily done with arkade.
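For example (this assumes arkade is installed; the chart installs the Profile CRD for you):

```sh
arkade install openfaas
```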
There are cases when you might want to set a custom DNS configuration per function instead of using the cluster-level DNS settings, for example if you are building a multi-tenant functions platform and need different DNS configuration for functions from different tenants. Profiles support setting the dnsPolicy and dnsConfig for a function Pod.
Create a profile with a custom DNS configuration.
In this example we configure custom nameservers.
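A sketch of such a Profile; the nameserver address is purely illustrative:

```yaml
kind: Profile
apiVersion: openfaas.com/v1
metadata:
  name: custom-dns
  namespace: openfaas
spec:
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
    - "1.1.1.1"
```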
You will have to make sure GPU nodes in your cluster are set up with GPU drivers and run the corresponding device plugin from the GPU vendor.
See the kubernetes documentation for detailed information on scheduling GPUs
Once you have installed the plugin, your cluster exposes a custom schedulable resource such as amd.com/gpu or nvidia.com/gpu. These are not exposed through the resources in the OpenFaaS Function spec but can be applied using Profiles.
Here's an example of a Profile that requests one NVIDIA GPU for a function:
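A sketch, assuming the NVIDIA device plugin is installed and exposes the `nvidia.com/gpu` resource:

```yaml
kind: Profile
apiVersion: openfaas.com/v1
metadata:
  name: nvidia-gpu
  namespace: openfaas
spec:
  resources:
    limits:
      nvidia.com/gpu: "1"
```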
Note: runtimeClass also needs to be set to use the relevant container runtime if your cluster has multiple runtimes.
Add this profile to the cluster and use the com.openfaas.profile annotation to apply the profile to functions that need access to a GPU:
com.openfaas.profile: nvidia-gpu
With the default RollingUpdate strategy, updating a function is not possible if all GPUs are in use.
Kubernetes will try to create a new function Pod before shutting down the old one, but the newly created Pod cannot start because no free GPUs are available.
You may want to consider switching the update strategy for functions using GPU to Recreate if you plan on using all available GPUs in your cluster. Keep in mind that this may cause the function to be offline for a moment during updates.
The update strategy type for function deployments can be added to the profile:
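For example, a sketch assuming your faas-netes version supports the strategy field in the Profile spec:

```yaml
kind: Profile
apiVersion: openfaas.com/v1
metadata:
  name: gpu-recreate
  namespace: openfaas
spec:
  strategy:
    type: Recreate
```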
You might want to set default memory and CPU resources for all your functions. This can be done by creating a Profile and applying it to all your functions by default.
Example of a profile that sets Memory/CPU limits and requests:
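A sketch with illustrative values:

```yaml
kind: Profile
apiVersion: openfaas.com/v1
metadata:
  name: default-resources
  namespace: openfaas
spec:
  resources:
    requests:
      memory: "128Mi"
      cpu: "100m"
    limits:
      memory: "256Mi"
```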
Add the profiles annotation to all functions to apply this profile.
com.openfaas.profile: default-resources
It is still possible to override the default settings on a per function basis by setting different values in the function stack.yaml: see Memory/CPU limits. Resources set in the function spec take precedence over resources set through Profiles.