This guide shows you how to monitor and improve the performance of your application running on Layer0.

Built-in Timings

All responses include an x-0-t header that records the time the request spent at each layer of the Layer0 stack.
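To work with this header programmatically, you can parse its value into a map of timings. The sketch below assumes the value is a comma-delimited list of name=duration pairs (the same shape as the x-0-user-t header described later); the component names in the sample value are hypothetical and will vary by stack.

```javascript
// Sketch: parse a comma-delimited `name=duration` timing header value
// into an object mapping each name to milliseconds.
// A value of `na` marks a timing that was never ended.
function parseTimingHeader(value) {
  const timings = {}
  for (const pair of value.split(',')) {
    const [name, duration] = pair.split('=')
    timings[name] = duration === 'na' ? null : Number(duration)
  }
  return timings
}

// Hypothetical example value:
console.log(parseTimingHeader('eh=12,ec=3')) // → { eh: 12, ec: 3 }
```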

Tracking Your Own Timings

You can use the @layer0/core/timing module to track how long it takes parts of your code to execute. A common case is tracking how long it takes to fetch a result from an upstream API. For example:

import Timing from '@layer0/core/timing'

const timing = new Timing('api').start()

try {
  const result = await fetch(API_URL)
  // ... use the result ...
} finally {
  timing.end() // adds a `x-0-user-t: api=(millis)` response header
}

• All timings are returned in an x-0-user-t response header.
• The value is a comma-delimited list of pairs of the form (name)=(duration-in-millis).
• The value of this header is logged in the xut field in access logs. The logged data is limited to 50 bytes, after which it is truncated.
• Any timings that are not ended before the response is sent will have a value of na.
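To make the header format and the 50-byte logging limit concrete, here is a sketch of how an x-0-user-t value could be assembled and truncated. This is an illustration only, not Layer0's actual implementation; Layer0 builds the header and truncates the logged value server-side.

```javascript
// Sketch only: builds an `x-0-user-t`-style value from named timings and
// applies the 50-byte limit used for the `xut` access-log field.
// A timing of `null` (never ended) is rendered as `na`.
function toXutField(timings) {
  const value = Object.entries(timings)
    .map(([name, ms]) => `${name}=${ms === null ? 'na' : ms}`)
    .join(',')
  // Logged data is limited to 50 bytes (ASCII here, so chars == bytes).
  return value.slice(0, 50)
}

console.log(toXutField({ api: 120, db: 45, cache: null }))
// → api=120,db=45,cache=na
```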

Performance Optimizations

Turn off Caching When not Needed

For GET routes that you know will not or must not be cached, always explicitly disable caching. This tells Layer0 not to attempt request coalescing, which improves performance, especially with slower upstreams.

For example, if you know that nothing from your legacy upstream will or can ever be cached, do this:

new Router().fallback(({ proxy, cache }) => {
  // Explicitly disable edge caching for all otherwise unmatched requests
  cache({
    edge: false,
  })
  // 'legacy' is assumed to be the backend name configured in layer0.config.js
  proxy('legacy')
})