By default, the template will automatically attempt to set the correct Content-Type header for you based on the type of response.
For example, returning a dict will automatically attach the header Content-Type: application/json, and returning a string will attach Content-Type: text/html, charset=utf-8.
This could be useful if you needed to serve up a static file or metadata to an external tool or service that is integrated with your functions. An example would be an HTTP readiness probe that checks the /healthz endpoint to see if a database is connected.
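A minimal sketch of both behaviours in one handler, assuming the python3-http template's event/response shape (the /healthz path and the body values are illustrative):

```python
def handle(event, context):
    if event.path == "/healthz":
        # dict body: the template serializes it and sets
        # Content-Type: application/json
        return {"statusCode": 200, "body": {"database": "connected"}}

    # string body: the template sets Content-Type: text/html, charset=utf-8
    return {"statusCode": 200, "body": "<h1>Welcome</h1>"}
```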
Accessing the request method:

```python
def handle(event, context):
    if event.method == 'GET':
        return {
            "statusCode": 200,
            "body": "GET request"
        }
    else:
        return {
            "statusCode": 405,
            "body": "Method not allowed"
        }
```
As explained in the introduction, only the Debian variant of the Python template is suitable for building native dependencies. Why? For one, many libraries are available as pre-compiled wheels, meaning they can be imported without any compilation. Secondly, Alpine Linux requires so many additional packages to build code that the resulting image becomes larger than the Debian base. Thirdly, Alpine Linux is not compatible with many native libraries because it uses its own C library, musl.
If a pre-compiled wheel isn't available for your chosen package, then you can use a build option to add a build toolchain. Build options are an abstracted list of packages to install, grouped together.
The current list of build_options for the Debian-based template is available in the templates repository in the template.yml file. Pull requests and contributions are welcome; however, packages can be specified even when they are not present as a build option.
Alternatively, individual packages within apt can be specified through build_args:
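For example, a hedged stack.yaml sketch combining both approaches (the function name, image, libpq build option, and the libpq-dev and gcc packages are illustrative assumptions, not values taken from the template):

```yaml
functions:
  pgfn:
    lang: python3-http-debian
    handler: ./pgfn
    image: ttl.sh/example/pgfn:latest
    build_options:
      - libpq
    build_args:
      ADDITIONAL_PACKAGE: "libpq-dev gcc"
```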
```sql
CREATE DATABASE main;
\c main;
CREATE TABLE users (name TEXT);

-- Insert the original PostgreSQL author's name into the test table:
INSERT INTO users (name) VALUES ('Michael Stonebraker');
```
handler.py:
```python
import psycopg2

def handle(event, context):
    try:
        conn = psycopg2.connect("dbname='main' user='postgres' port=5432 host='192.168.1.35' password='passwd'")
    except Exception as e:
        print("DB error {}".format(e))
        return {
            "statusCode": 500,
            "body": str(e)
        }

    cur = conn.cursor()
    cur.execute("""SELECT * from users;""")
    rows = cur.fetchall()

    return {
        "statusCode": 200,
        "body": rows
    }
```
Always read the secret from an OpenFaaS secret at /var/openfaas/secrets/secret-name. Using environment variables for secrets is an anti-pattern, since their values are visible via the OpenFaaS API.
To authenticate a function with a pre-shared secret, or API token, first create a secret, bind that secret to the function, then read it at runtime and validate it.
The base images for the official OpenFaaS templates come from the Docker Hub; these images are built with automation and should always have the latest apk or apt packages installed.
That said, if you need to upgrade the images sooner, or are using an older image that was mirrored from the Docker Hub, you can add a --build-arg flag or build_args: entry in stack.yaml to force an upgrade on each build.
A common use-case for static files is when you want to serve HTML, look up information from a JSON manifest, or render some kind of template.
With the python templates, static files and folders can just be added to the handler directory and will be copied into the function image.
To read a file, e.g. data.json, back at runtime you can do the following:

```python
import json
import os

def handle(event, context):
    if event.path == "/static":
        # Get the directory where this handler.py file is located
        current_dir = os.path.dirname(os.path.abspath(__file__))
        data_file_path = os.path.join(current_dir, 'data.json')

        # Read the data.json file
        with open(data_file_path, 'r') as file:
            data = json.load(file)

        return {
            "statusCode": 200,
            "body": json.dumps(data),
            "headers": {
                "Content-Type": "application/json"
            }
        }
    else:
        return {
            "statusCode": 200,
            "body": "Hello from OpenFaaS!"
        }
```
Fork the template repository and modify the template. Recommended method that allows for distribution and reuse of the template.
Pull the template and apply patches directly in the ./template/<language_name> directory. Good for quick iteration and experimentation with template modifications. The modified template cannot be shared and reused, and changes may get overwritten when pulling templates again.
Add the required packages for auto instrumentation to the requirements.txt file of the template:
opentelemetry-distro
opentelemetry-exporter-otlp
Update the Dockerfile to run the bootstrap command after the template and function packages have been installed:
```diff
# Build the function directory and install any user-specified components
USER app

RUN mkdir -p function
RUN touch ./function/__init__.py
WORKDIR /home/app/function/
COPY --chown=app:app function/requirements.txt	.
RUN pip install --no-cache-dir --user -r requirements.txt
+ RUN opentelemetry-bootstrap -a install
```
The opentelemetry-bootstrap -a install command reads through the list of packages installed in your active site-packages folder, and installs the corresponding instrumentation libraries for these packages, if applicable. The OpenTelemetry Python agent uses monkey patching to modify functions in these libraries at runtime.
Update the fprocess ENV in the Dockerfile to start the OpenTelemetry agent:
```diff
# configure WSGI server and healthcheck
USER app

- ENV fprocess="python index.py"
+ ENV fprocess="opentelemetry-instrument python index.py"
```
Use your modified template to create a new function.
The OpenTelemetry agent can be configured using environment variables on the function:
OTEL_SERVICE_NAME sets the name of the service associated with the telemetry and is used to identify telemetry for a specific function. It can be set to any value you want, but we recommend using the clear function identifier <fn-name>.<fn-namespace>.
OTEL_TRACES_EXPORTER specifies which tracer exporter to use. In this example, traces are exported both to the console (stdout) and via otlp. The otlp option tells opentelemetry-instrument to send the traces to an endpoint that accepts OTLP via gRPC.
Setting OTEL_METRICS_EXPORTER and OTEL_LOGS_EXPORTER to none disables the metrics and logs exporters. You can enable them if desired.
OTEL_EXPORTER_OTLP_ENDPOINT sets the endpoint where telemetry is exported to.
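Putting these variables together, a hedged stack.yaml sketch (the function name, image, namespace, and collector endpoint are illustrative assumptions):

```yaml
functions:
  my-fn:
    lang: python3-http
    handler: ./my-fn
    image: ttl.sh/example/my-fn:latest
    environment:
      OTEL_SERVICE_NAME: "my-fn.openfaas-fn"
      OTEL_TRACES_EXPORTER: "console,otlp"
      OTEL_METRICS_EXPORTER: "none"
      OTEL_LOGS_EXPORTER: "none"
      OTEL_EXPORTER_OTLP_ENDPOINT: "otel-collector.default:4317"
```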