A prefect.yaml file is a YAML file describing base settings for your deployments, procedural steps for preparing deployments,
and instructions for preparing the execution environment for a deployment run.
Initialize your deployment configuration, which creates the prefect.yaml file, with the CLI command prefect init
in any directory or repository that stores your flow code.
The prefect.yaml file contains:
- deployment configuration for deployments created from this file
- default instructions for how to build and push any necessary code artifacts (such as Docker images)
- default instructions for pulling a deployment in remote execution environments (for example, cloning a GitHub repository).
Prefect uses the deployment configuration from this file with the prefect deploy CLI command when creating a deployment.
The base structure for prefect.yaml looks like this:
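A minimal sketch of this structure, modeled on the file that prefect init generates (null values are placeholders; the exact contents depend on the recipe you choose):

```yaml
# generic metadata about this project
name: null
prefect-version: null

# preparation steps for creating deployments
build: null
push: null

# runtime steps for a deployment run
pull: null

# deployment configurations
deployments:
- name: null
  version: null
  tags: []
  description: null
  schedule: null
  entrypoint: null
  parameters: {}
  work_pool:
    name: null
    work_queue_name: null
    job_variables: {}
```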
You can create deployments via prefect deploy without altering the deployments section of your
prefect.yaml file; the prefect deploy command guides deployment creation through interactive prompts. The prefect.yaml
file facilitates version-controlling your deployment configuration and managing multiple deployments.
Deployment actions
Deployment actions defined in your prefect.yaml file control the lifecycle of the creation and execution of your deployments.
The three actions available are build, push, and pull.
pull is the only required deployment action. It defines how Prefect pulls your deployment in remote execution
environments.
Each action is defined as a list of steps executed in sequence. Each step has the following format:
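A sketch of this format (the step path and keyword arguments are illustrative):

```yaml
section:
- prefect_package.path.to.importable.step:
    id: unique-step-id  # optional: lets later steps reference this step's outputs
    requires: "pip-installable-package-spec"  # optional: auto-install source
    kwarg1: value
    kwarg2: more-values
```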
The requires field tells Prefect which package provides the step; Prefect uses it to auto-install the package if the step is not
found in the current environment. Each step can also specify an id to reference its outputs in
future steps. The additional fields map directly to Python keyword arguments to the step function. Within a given section,
steps always run in the order they appear within the prefect.yaml file.
The build action
Use the build section of prefect.yaml to specify setup steps or dependencies
(like creating a Docker image) required to run your deployments.
If you initialize with the Docker recipe, you are prompted to provide
required information, such as image name and tag:
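Once provided, the generated build section looks roughly like this (the image name and tag are placeholders):

```yaml
build:
- prefect_docker.deployments.steps.build_docker_image:
    id: build-image
    requires: prefect-docker
    image_name: my-registry/my-image
    tag: my-tag
    dockerfile: auto
```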
These values are saved into your prefect.yaml as template values.
We recommend using a templated {{ image }} within prefect.yaml (specifically in the work pool’s job_variables section).
By avoiding hardcoded values, the build step and deployment specification won’t have mismatched values.
Some steps require Prefect integrations: in the build step example above, you relied on the
prefect-docker package. In cases that deal with external services,
additional required packages are auto-installed for you.

Each deployment action can be composed of multiple steps. For example, to build a Docker image tagged with the current commit hash, you can use the run_shell_script step and feed its output into the build_docker_image step:
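A sketch of that composition (the image name is a placeholder):

```yaml
build:
- prefect.deployments.steps.run_shell_script:
    id: get-commit-hash
    script: git rev-parse --short HEAD
    stream_output: false
- prefect_docker.deployments.steps.build_docker_image:
    requires: prefect-docker
    image_name: my-image
    tag: "{{ get-commit-hash.stdout }}"
    dockerfile: auto
```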
The id field is used in the run_shell_script step to reference its output in the next step.
The push action
The push section is most critical for situations where code is not stored on persistent filesystems or in version control. In this scenario, code is often pushed to and pulled from a Cloud storage bucket (for example, S3, GCS, or Azure Blob Storage). The push section allows users to specify and customize the logic for pushing this code repository to arbitrary remote locations. For example, a user who stores their code in an S3 bucket and relies on default worker settings for its runtime environment could use the s3 recipe.
After answering the recipe prompts and inspecting your new prefect.yaml file, you should find that the push and pull sections have been templated out
as follows:
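A sketch of the templated sections (bucket and folder values are placeholders):

```yaml
push:
- prefect_aws.deployments.steps.push_to_s3:
    id: push-code
    requires: prefect-aws
    bucket: my-bucket
    folder: my-project
    credentials: null

pull:
- prefect_aws.deployments.steps.pull_from_s3:
    id: pull-code
    requires: prefect-aws
    bucket: "{{ push-code.bucket }}"
    folder: "{{ push-code.folder }}"
    credentials: null
```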
The bucket value is populated with what you provided (either through a prompt or the --field flag); note that the
folder property of the pull step is a template: the push_to_s3 step outputs both a bucket value and a folder
value that downstream steps can use in templates. This helps you keep your steps consistent across edits.
As discussed above, if you use blocks, you can template the credentials section with
a block reference for secure and dynamic credentials access:
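For example, a push step with a templated credentials block might look like this (the bucket name and block slug are placeholders):

```yaml
push:
- prefect_aws.deployments.steps.push_to_s3:
    requires: prefect-aws
    bucket: my-bucket
    folder: my-project
    credentials: "{{ prefect.blocks.aws-credentials.dev-credentials }}"
```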
Anytime you run prefect deploy, this push section executes upon successful completion of your build section.
The pull action
The pull section is the most important section within the prefect.yaml file. It contains instructions for preparing your
flows for a deployment run. These instructions execute each time a deployment in this folder is run through a worker.
There are three main types of steps that typically show up in a pull section:
- set_working_directory: this step sets the working directory for the process prior to importing your flow
- git_clone: this step clones the provided repository on the provided branch
- pull_from_{cloud}: this step pulls the working directory from a Cloud storage location (for example, S3)
Use a GitHubCredentials block to clone a private GitHub repository:
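A sketch of such a pull section (the repository URL and block slug are placeholders):

```yaml
pull:
- prefect.deployments.steps.git_clone:
    repository: https://github.com/org/repo.git
    credentials: "{{ prefect.blocks.github-credentials.my-credentials }}"
```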
Likewise, use a BitBucketCredentials or GitLabCredentials block to clone from Bitbucket or GitLab. In
lieu of a credentials block, you can also provide a GitHub, GitLab, or Bitbucket token directly to the access_token field.
Use a Secret block to do this securely:
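For example (the repository URL and Secret block slug are placeholders):

```yaml
pull:
- prefect.deployments.steps.git_clone:
    repository: https://bitbucket.org/org/repo.git
    access_token: "{{ prefect.blocks.secret.bitbucket-token }}"
```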
Utility steps
Use utility steps within a build, push, or pull action to assist in managing the deployment lifecycle:

run_shell_script allows for the execution of one or more shell commands in a subprocess, and returns the standard output and standard error of the script. This step is useful for scripts that require execution in a specific environment, or those which have specific input and output requirements. Note that setting stream_output: true for run_shell_script writes the output and error to stdout in the execution environment; this output is not sent to the Prefect API.
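A sketch of a run_shell_script utility step (the script and id are illustrative):

```yaml
build:
- prefect.deployments.steps.run_shell_script:
    id: get-user
    script: whoami
    stream_output: true
```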
pip_install_requirements installs dependencies from a requirements.txt file within a specified directory.
The following example installs dependencies from a requirements.txt file after cloning:
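A sketch of this pattern (the repository URL is a placeholder):

```yaml
pull:
- prefect.deployments.steps.git_clone:
    id: clone-step
    repository: https://github.com/org/repo.git
- prefect.deployments.steps.pip_install_requirements:
    directory: "{{ clone-step.directory }}"
    requirements_file: requirements.txt
```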
This example assumes that retrieve_secrets is a custom Python module packaged
into the default working directory of a Docker image (which is /opt/prefect by default), and that
main is the function entry point, which returns an access token (for example, return {"access_token": access_token}) like the
preceding example, but using the Azure Python SDK for retrieval.
Custom deployment steps
Deployment steps in prefect.yaml are not limited to built-in Prefect utilities. Every step is simply
a reference to a Python function, specified by its fully-qualified name. This means you can write your
own custom step functions and use them in any build, push, or pull section.
How steps work
When Prefect encounters a step like my_module.my_function, it imports and calls that function with
the provided keyword arguments. As long as the function is importable in the execution environment, it
works as a deployment step. Custom step functions should accept keyword arguments and return a
dictionary (which can be empty) so that their outputs are available to subsequent steps through
templating.
Writing a custom step function
Here is an example of a custom step that loads environment variables from a .env file before a flow
run executes:
my_steps.py
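A minimal sketch of such a step (the function and key names are illustrative; the Prefect contract is simply keyword arguments in, dictionary out):

```python
# my_steps.py (illustrative module name)
import os


def load_env_file(path: str = ".env") -> dict:
    """Load KEY=VALUE pairs from a .env file into os.environ.

    Accepts its configuration as keyword arguments and returns a dict,
    so its output is available to subsequent steps through templating.
    """
    loaded = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines, comments, and malformed entries
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ[key] = value
            loaded[key] = value
    return {"loaded_keys": list(loaded)}
```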
Reference it in prefect.yaml like any other step:
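For example (my_steps.load_env_file is an illustrative module and function name; the module must be importable where the worker runs):

```yaml
pull:
- my_steps.load_env_file:
    path: .env
```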
Step function requirements: custom step functions must:
- Be importable in the execution environment (the module must be installed or available on sys.path where the worker runs).
- Accept their configuration as keyword arguments.
- Return a dictionary. The returned key-value pairs become available as template variables for subsequent steps (for example, "{{ step-id.key }}"). Return an empty dictionary if no output is needed.
Practical examples
Run a database migration before each deployment run (my_config/steps.py):
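A minimal sketch of such a step, using sqlite3 as a stand-in database (the module path, database path, and schema are all illustrative):

```python
# my_config/steps.py (illustrative module path)
import sqlite3


def run_migrations(db_path: str = "app.db") -> dict:
    """Apply an idempotent schema migration before the flow runs."""
    conn = sqlite3.connect(db_path)
    with conn:  # commits the transaction on success
        conn.execute(
            "CREATE TABLE IF NOT EXISTS events "
            "(id INTEGER PRIMARY KEY, payload TEXT)"
        )
    conn.close()
    # Return a dict so the step output can be templated downstream
    return {"migrated_db": db_path}
```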
Templating options
Values that you place within your prefect.yaml file can reference dynamic values in several different ways:

- step outputs: every step of both build and push produces named fields such as image_name; you can reference these fields within prefect.yaml and prefect deploy will populate them with each call. References must be enclosed in double brackets, in "{{ field_name }}" format.
- blocks: you can reference Prefect blocks with the {{ prefect.blocks.block_type.block_slug }} syntax. It is highly recommended that you use block references for any sensitive information (such as a GitHub access token or any credentials) to avoid hardcoding these values in plaintext.
- variables: you can reference Prefect variables with the {{ prefect.variables.variable_name }} syntax. Use variables to reference non-sensitive, reusable pieces of information, such as a default image name or a default work pool name.
- environment variables: you can also reference environment variables with the special syntax {{ $MY_ENV_VAR }}. This is especially useful for referencing environment variables that are set at runtime.
Consider the following prefect.yaml file as an example:
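A sketch of such a file (image, flow, and pool names are placeholders):

```yaml
build:
- prefect_docker.deployments.steps.build_docker_image:
    id: build-image
    requires: prefect-docker
    image_name: my-repo/my-image
    tag: my-tag

deployments:
- name: my-deployment
  entrypoint: flows/hello.py:my_flow
  work_pool:
    name: my-docker-pool
    job_variables:
      image: "{{ build-image.image }}"
```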
Because the build steps produce fields called image_name and tag, every time you deploy a new version of the deployment,
the {{ build-image.image }} variable is dynamically populated with the relevant values.
Docker step: the most commonly used build step is prefect_docker.deployments.steps.build_docker_image, which produces both the image_name and tag fields.

A prefect.yaml file can have multiple deployment configurations that control the behavior of several deployments.
You can manage these deployments independently of one another, allowing you to deploy the same flow with different
configurations in the same codebase.
Work with multiple deployments with prefect.yaml
Prefect supports multiple deployment declarations within the prefect.yaml file. This method of declaring multiple
deployments supports version control for all deployments through a single command.
Add new deployment declarations to the prefect.yaml file with a new entry to the deployments list.
Each deployment declaration must have a unique name field so you can select individual declarations when using the
prefect deploy command.
For example, consider the following prefect.yaml file:
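A sketch of a multi-deployment file (flow paths and pool names are placeholders):

```yaml
deployments:
- name: deployment-1
  entrypoint: flows/hello.py:my_flow
  parameters:
    number: 42
  work_pool:
    name: my-process-pool
- name: deployment-2
  entrypoint: flows/goodbye.py:my_other_flow
  work_pool:
    name: my-docker-pool
```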
Each deployment declaration has a unique name
field and can be deployed individually with the --name flag when deploying.
For example, to deploy deployment-1, run prefect deploy --name deployment-1.
To deploy multiple deployments at once, provide multiple --name flags.
To deploy all deployments, use the --all flag. You can also pass glob-style patterns to --name to select deployments; for example, such patterns can match:
- all deployments from the flow my-flow
- all flows ending in dev with a deployment named my-deployment
- all deployments starting with dep and ending in prod
Non-interactive deployment
For CI/CD pipelines and automated environments, use the --no-prompt flag (for example, prefect deploy --all --no-prompt) to skip interactive prompts.
CLI options when deploying multiple deployments: when deploying more than one deployment with a single
prefect deploy command, any additional attributes provided are ignored. To provide overrides to a deployment through the CLI, you must deploy that deployment individually.

Reuse configuration across deployments
Because a prefect.yaml file is a standard YAML file, you can use YAML aliases
to reuse configuration across deployments.
This capability allows multiple deployments to share the work pool configuration, deployment actions, or other
configurations.
Declare a YAML alias with the &{alias_name} syntax and insert that alias elsewhere in the file with the *{alias_name}
syntax. When aliasing YAML maps, you can override specific fields of the aliased map with the <<: *{alias_name} syntax by
adding the overriding fields below it.
We recommend adding a definitions section to your prefect.yaml file at the same level as the deployments section to store your
aliases.
For example:
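A sketch using a definitions section (all names are placeholders):

```yaml
definitions:
  work_pools:
    my_docker_work_pool: &my_docker_work_pool
      name: my-docker-work-pool
  schedules:
    every_ten_minutes: &every_10_minutes
      interval: 600
  actions:
    docker_build: &docker_build
    - prefect_docker.deployments.steps.build_docker_image: &docker_build_config
        id: build-image
        requires: prefect-docker
        image_name: my-image
        tag: latest

deployments:
- name: deployment-1
  entrypoint: flows/hello.py:my_flow
  schedule: *every_10_minutes
  work_pool: *my_docker_work_pool
  build: *docker_build
- name: deployment-2
  entrypoint: flows/goodbye.py:my_other_flow
  work_pool: *my_docker_work_pool
  build:
  - prefect_docker.deployments.steps.build_docker_image:
      <<: *docker_build_config
      dockerfile: Dockerfile.custom
- name: deployment-3
  entrypoint: flows/hello.py:my_flow
  schedule: *every_10_minutes
```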
In this example:
- deployment-1 and deployment-2 use the same work pool configuration
- deployment-1 and deployment-3 use the same schedule
- deployment-1 and deployment-2 use the same build deployment action, but deployment-2 overrides the dockerfile field to use a custom Dockerfile
Deployment declaration reference
Deployment fields
These are fields you can add to each deployment declaration.

| Property | Description |
|---|---|
| name | The name to give to the created deployment. Used with the prefect deploy command to create or update specific deployments. |
| version | An optional version for the deployment. |
| tags | A list of strings to assign to the deployment as tags. |
| description | An optional description for the deployment. |
| schedule | An optional schedule to assign to the deployment. Fields for this section are documented in the Schedule Fields section. |
| concurrency_limit | An optional deployment concurrency limit. Set to an integer for a simple limit, or use the Concurrency Limit Fields for additional options like collision strategy and grace period. |
| triggers | An optional array of triggers to assign to the deployment. |
| entrypoint | Required path to the .py file containing the flow you want to deploy (relative to the root directory of your development folder), combined with the name of the flow function, in the format path/to/file.py:flow_function_name. |
| parameters | Optional default values to provide for the parameters of the deployed flow. Should be an object with key/value pairs. |
| enforce_parameter_schema | Boolean flag that determines whether the API should validate the parameters passed to a flow run against the parameter schema generated for the deployed flow. |
| work_pool | Information about where to schedule flow runs for the deployment. Fields for this section are documented in the Work Pool Fields section. |
Schedule fields
These are fields you can add to a deployment declaration's schedule section.

| Property | Description |
|---|---|
| interval | Time between flow runs. Accepts an integer (seconds), an ISO 8601 duration string (e.g., PT10M, PT1H30M, P1D), or a time format string (e.g., 1:30:00). Cannot be used in conjunction with cron or rrule. |
| anchor_date | Datetime string indicating the starting or "anchor" date to begin the schedule. If no anchor_date is supplied, the current UTC time is used. Can only be used with interval. |
| timezone | String name of a time zone, used to enforce localization behaviors like DST boundaries. See the IANA Time Zone Database for valid time zones. |
| cron | A valid cron string. Cannot be used in conjunction with interval or rrule. |
| day_or | Boolean indicating how croniter handles day and day_of_week entries. Must be used with cron. Defaults to True. |
| rrule | String representation of an RRule schedule. See the rrulestr examples for syntax. Cannot be used in conjunction with interval or cron. |
Concurrency limit fields
The concurrency_limit field accepts either a simple integer or a section with additional options:

| Property | Description |
|---|---|
| limit | The maximum number of concurrent flow runs for the deployment. |
| collision_strategy | Configures the behavior for runs once the concurrency limit is reached. Options are ENQUEUE and CANCEL_NEW. Defaults to ENQUEUE. |
| grace_period_seconds | The time in seconds to allow infrastructure to start before the concurrency slot is released. Must be between 60 and 86400 seconds. If not set, falls back to the server setting (default 300 seconds / 5 minutes). |
Work pool fields
These are fields you can add to a deployment declaration's work_pool section.

| Property | Description |
|---|---|
| name | The name of the work pool to schedule flow runs in for the deployment. |
| work_queue_name | The name of the work queue within the specified work pool to schedule flow runs in for the deployment. If not provided, the default queue for the specified work pool is used. |
| job_variables | Values used to override the default values in the specified work pool's base job template. Maps directly to a created deployment's infra_overrides attribute. |
Deployment mechanics
Anytime you run prefect deploy in a directory that contains a prefect.yaml file, the following actions take place in order:
- The prefect.yaml file loads. First, the build section loads and all variable and block references resolve. The steps then run in the order provided.
- Next, the push section loads and all variable and block references resolve; the steps within this section then run in the order provided.
- Next, the pull section is templated with any step outputs but is not run. Block references are not hydrated for security purposes: they are always resolved at runtime.
- Next, all variable and block references resolve within the deployment declaration. All flags provided through the prefect deploy CLI are then overlaid on the values loaded from the file.
- Finally, the fully realized deployment specification is registered with the Prefect API.
Within each step, the following actions occur in order:
- The step's inputs and block / variable references resolve.
- The step's function is imported; if it cannot be found, the special requires keyword installs the necessary packages.
- The step's function is called with the resolved inputs.
- The step's output is returned and used to resolve inputs for subsequent steps.
Update a deployment
To update a deployment, make any desired changes to the prefect.yaml file, and run prefect deploy. Running just this command will prompt you to select a deployment interactively, or you may specify the deployment to update with --name your-deployment.